Publications

RSS feed available.

2019

  • P. Bosilj, E. Aptoula, T. Duckett, and G. Cielniak, “Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture,” Journal of Field Robotics, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Agricultural robots rely on semantic segmentation for distinguishing between crops and weeds in order to perform selective treatments, increase yield and crop health while reducing the amount of chemicals used. Deep learning approaches have recently achieved both excellent classification performance and real-time execution. However, these techniques also rely on a large amount of training data, requiring a substantial labelling effort, both of which are scarce in precision agriculture. Additional design efforts are required to achieve commercially viable performance levels under varying environmental conditions and crop growth stages. In this paper, we explore the role of knowledge transfer between deep-learning-based classifiers for different crop types, with the goal of reducing the retraining time and labelling efforts required for a new crop. We examine the classification performance on three datasets with different crop types and containing a variety of weeds, and compare the performance and retraining efforts required when using data labelled at pixel level with partially labelled data obtained through a less time-consuming procedure of annotating the segmentation output. We show that transfer learning between different crop types is possible, and reduces training times by up to 80%. Furthermore, we show that even when the data used for re-training is imperfectly annotated, the classification performance is within 2% of that of networks trained with laboriously annotated pixel-precision data.

    @article{lirolem35535,
    publisher = {Wiley Periodicals, Inc.},
    title = {Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture},
    journal = {Journal of Field Robotics},
    year = {2019},
    author = {Petra Bosilj and Erchan Aptoula and Tom Duckett and Grzegorz Cielniak},
    abstract = {Agricultural robots rely on semantic segmentation for distinguishing between crops and weeds in order to perform selective treatments, increase yield and crop health while reducing the amount of chemicals used. Deep learning approaches have recently achieved both excellent classification performance and real-time execution. However, these techniques also rely on a large amount of training data, requiring a substantial labelling effort, both of which are scarce in precision agriculture. Additional design efforts are required to achieve commercially viable performance levels under varying environmental conditions and crop growth stages. In this paper, we explore the role of knowledge transfer between deep-learning-based classifiers for different crop types, with the goal of reducing the retraining time and labelling efforts required for a new crop. We examine the classification performance on three datasets with different crop types and containing a variety of weeds, and compare the performance and retraining efforts required when using data labelled at pixel level with partially labelled data obtained through a less time-consuming procedure of annotating the segmentation output. We show that transfer learning between different crop types is possible, and reduces training times by up to 80\%. Furthermore, we show that even when the data used for re-training is imperfectly annotated, the classification performance is within 2\% of that of networks trained with laboriously annotated pixel-precision data.},
    url = {http://eprints.lincoln.ac.uk/35535/}
    }
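
    A minimal sketch of the fine-tuning procedure this abstract describes: start from a segmentation network trained on a source crop, freeze the feature extractor, and retrain only the classification head on (possibly partially labelled) images of the new crop. The architecture, file names and hyperparameters below are illustrative assumptions, not the paper's exact setup.

    # Hypothetical transfer-learning sketch in PyTorch; classes are background/crop/weed.
    import torch
    import torch.nn as nn
    from torchvision.models.segmentation import fcn_resnet50

    NUM_CLASSES = 3
    model = fcn_resnet50(weights=None, num_classes=NUM_CLASSES)
    # model.load_state_dict(torch.load("source_crop_weights.pth"))  # weights learned on the source crop

    # Freeze the encoder so only the segmentation head is retrained on the target crop.
    for p in model.backbone.parameters():
        p.requires_grad = False

    optimiser = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-4)
    criterion = nn.CrossEntropyLoss(ignore_index=255)  # 255 marks pixels left unlabelled (partial labels)

    def finetune_step(images, labels):
        """One re-training step on (partially) labelled target-crop images."""
        model.train()
        optimiser.zero_grad()
        logits = model(images)["out"]        # (B, C, H, W)
        loss = criterion(logits, labels)     # labels: (B, H, W) integer masks
        loss.backward()
        optimiser.step()
        return loss.item()

    # Dummy batch showing the expected tensor shapes.
    print(finetune_step(torch.randn(2, 3, 128, 128), torch.randint(0, NUM_CLASSES, (2, 128, 128))))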
  • P. Bosilj, I. Gould, T. Duckett, and G. Cielniak, “Pattern spectra from different component trees for estimating soil size distribution,” in 14th International Symposium on Mathematical Morphology, 2019.
    [BibTeX] [Abstract] [Download PDF]

    We study the pattern spectra in the context of soil structure analysis. Good soil structure is vital for sustainable crop growth. Accurate and fast measuring methods can contribute greatly to soil management decisions. However, the current in-field approaches contain a degree of subjectivity, while obtaining quantifiable results through laboratory techniques typically involves sieving the soil which is labour- and time-intensive. We aim to replace this physical sieving process through image analysis, and investigate the effectiveness of pattern spectra to capture the size distribution of the soil aggregates. We calculate the pattern spectra from partitioning hierarchies in addition to the traditional max-tree. The study is posed as an image retrieval problem, and confirms the ability of pattern spectra and suitability of different partitioning trees to re-identify soil samples in different arrangements and scales.

    @inproceedings{lirolem35548,
    title = {Pattern Spectra from Different Component Trees for Estimating Soil Size Distribution},
    publisher = {Springer},
    journal = {International Symposium on Mathematical Morphology},
    booktitle = {14th International Symposium on Mathematical Morphology},
    year = {2019},
    author = {Petra Bosilj and Iain Gould and Tom Duckett and Grzegorz Cielniak},
    month = {June},
    abstract = {We study the pattern spectra in context of soil structure analysis. Good soil structure is vital for sustainable crop growth. Accurate and fast measuring methods can contribute greatly to soil management decisions. However, the current in-field approaches contain a degree of subjectivity, while obtaining quantifiable results through laboratory techniques typically involves sieving the soil which is labour- and time-intensive. We aim to replace this physical sieving process through image analysis, and investigate the effectiveness of pattern spectra to capture the size distribution of the soil aggregates. We calculate the pattern spectra from partitioning hierarchies in addition to the traditional max-tree. The study is posed as an image retrieval problem, and confirms the ability of pattern spectra and suitability of different partitioning trees to re-identify soil samples in different arrangements and scales.},
    url = {http://eprints.lincoln.ac.uk/35548/}
    }
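
    The paper's pattern spectra are computed from component trees (the max-tree and partitioning hierarchies); a much simpler relative of the same idea is a classical grey-scale granulometry, which also records how much image content lives at each structure size. The sketch below is only that simpler stand-in with made-up data, not the component-tree method itself.

    import numpy as np
    from scipy import ndimage

    def granulometric_spectrum(image, sizes):
        """Image 'volume' removed between grey-scale openings of increasing size."""
        volumes = [image.sum(dtype=np.float64)]
        for s in sizes:
            volumes.append(ndimage.grey_opening(image, size=(s, s)).sum(dtype=np.float64))
        return -np.diff(volumes)  # content falling into each size band

    rng = np.random.default_rng(0)
    soil_image = rng.integers(0, 256, size=(256, 256)).astype(np.float64)
    spectrum = granulometric_spectrum(soil_image, sizes=[3, 5, 9, 17, 33])
    print(spectrum / spectrum.sum())  # normalised size distribution, comparable across samples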
  • C. Coppola, S. Cosar, D. R. Faria, and N. Bellotto, “Social activity recognition on continuous RGB-D video sequences,” International Journal of Social Robotics, p. 1–15, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Modern service robots are provided with one or more sensors, often including RGB-D cameras, to perceive objects and humans in the environment. This paper proposes a new system for the recognition of human social activities from a continuous stream of RGB-D data. Many of the works until now have succeeded in recognising activities from clipped videos in datasets, but for robotic applications it is important to be able to move to more realistic scenarios in which such activities are not manually selected. For this reason, it is useful to detect the time intervals when humans are performing social activities, the recognition of which can contribute to trigger human-robot interactions or to detect situations of potential danger. The main contributions of this research work include a novel system for the recognition of social activities from continuous RGB-D data, combining temporal segmentation and classification, as well as a model for learning the proximity-based priors of the social activities. A new public dataset with RGB-D videos of social and individual activities is also provided and used for evaluating the proposed solutions. The results show the good performance of the system in recognising social activities from continuous RGB-D data.

    @article{lirolem35151,
    pages = {1--15},
    year = {2019},
    author = {Claudio Coppola and Serhan Cosar and Diego R. Faria and Nicola Bellotto},
    title = {Social Activity Recognition on Continuous RGB-D Video Sequences},
    publisher = {Springer},
    journal = {International Journal of Social Robotics},
    url = {http://eprints.lincoln.ac.uk/35151/},
    abstract = {Modern service robots are provided with one or more sensors, often including RGB-D cameras, to perceive objects and humans in the environment. This paper proposes a new system for the recognition of human social activities from a continuous stream of RGB-D data. Many of the works until now have succeeded in recognising activities from clipped videos in datasets, but for robotic applications it is important to be able to move to more realistic scenarios in which such activities are not manually selected. For this reason, it is useful to detect the time intervals when humans are performing social activities, the recognition of which can contribute to trigger human-robot interactions or to detect situations of potential danger. The main contributions of this research work include a novel system for the recognition of social activities from continuous RGB-D data, combining temporal segmentation and classification, as well as a model for learning the proximity-based priors of the social activities. A new public dataset with RGB-D videos of social and individual activities is also provided and used for evaluating the proposed solutions. The results show the good performance of the system in recognising social activities from continuous RGB-D data.}
    }
  • S. Cosar and N. Bellotto, “Human re-identification with a robot thermal camera using entropy-based sampling,” Journal of Intelligent and Robotic Systems, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Human re-identification is an important feature of domestic service robots, in particular for elderly monitoring and assistance, because it allows them to perform personalized tasks and human-robot interactions. However, vision-based re-identification systems are subject to limitations due to human pose and poor lighting conditions. This paper presents a new re-identification method for service robots using thermal images. In robotic applications, as the number and size of thermal datasets is limited, it is hard to use approaches that require a huge amount of training samples. We propose a re-identification system that can work using only a small amount of data. During training, we perform entropy-based sampling to obtain a thermal dictionary for each person. Then, a symbolic representation is produced by converting each video into sequences of dictionary elements. Finally, we train a classifier using this symbolic representation and geometric distribution within the new representation domain. The experiments are performed on a new thermal dataset for human re-identification, which includes various situations of human motion, poses and occlusion, and which is made publicly available for research purposes. The proposed approach has been tested on this dataset and its improvements over standard approaches have been demonstrated.

    @article{lirolem35778,
    year = {2019},
    author = {Serhan Cosar and Nicola Bellotto},
    title = {Human Re-Identification with a Robot Thermal Camera using Entropy-based Sampling},
    journal = {Journal of Intelligent and Robotic Systems},
    url = {http://eprints.lincoln.ac.uk/35778/},
    abstract = {Human re-identification is an important feature of domestic service robots, in particular for elderly monitoring and assistance, because it allows them to perform personalized tasks and human-robot interactions. However vision-based re-identification systems are subject to limitations due to human pose and poor lighting conditions. This paper presents a new re-identification method for service robots using thermal images. In robotic applications, as the number and size of thermal datasets is limited, it is hard to use approaches that require huge amount of training samples. We propose a re-identification system that can work using only a small amount of data. During training, we perform entropy-based sampling to obtain a thermal dictionary for each person. Then, a symbolic representation is produced by converting each video into sequences of dictionary elements. Finally, we train a classifier using this symbolic representation and geometric distribution within the new representation domain. The experiments are performed on a new thermal dataset for human re-identification, which includes various situations of human motion, poses and occlusion, and which is made publicly available for research purposes. The proposed approach has been tested on this dataset and its improvements over standard approaches have been demonstrated.}
    }
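
    The entropy-based sampling step can be pictured as: score every thermal frame by the Shannon entropy of its intensity histogram and keep the most informative frames as that person's dictionary. The binning, the value of k and the data layout below are illustrative assumptions rather than the paper's exact settings.

    import numpy as np

    def frame_entropy(frame, bins=64):
        """Shannon entropy of a frame's intensity histogram."""
        hist, _ = np.histogram(frame, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def build_dictionary(frames, k=10):
        """Keep the k highest-entropy (most informative) frames for one person."""
        scores = np.array([frame_entropy(f) for f in frames])
        keep = np.argsort(scores)[::-1][:k]
        return [frames[i] for i in sorted(keep)]

    rng = np.random.default_rng(1)
    video = [rng.normal(30.0, 2.0, size=(60, 40)) for _ in range(100)]  # stand-in thermal frames
    dictionary = build_dictionary(video)
    print(len(dictionary), round(frame_entropy(dictionary[0]), 2))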
  • H. Cuayahuitl, D. Lee, S. Ryu, S. Choi, I. Hwang, and J. Kim, “Deep reinforcement learning for chatbots using clustered actions and human-likeness rewards,” in International Joint Conference on Neural Networks (IJCNN), 2019.
    [BibTeX] [Abstract] [Download PDF]

    Training chatbots using the reinforcement learning paradigm is challenging due to high-dimensional states, infinite action spaces and the difficulty in specifying the reward function. We address such problems using clustered actions instead of infinite actions, and a simple but promising reward function based on human-likeness scores derived from human-human dialogue data. We train Deep Reinforcement Learning (DRL) agents using chitchat data in raw text, without any manual annotations. Experimental results using different splits of training data report the following. First, that our agents learn reasonable policies in the environments they get familiarised with, but their performance drops substantially when they are exposed to a test set of unseen dialogues. Second, that the choice of sentence embedding size between 100 and 300 dimensions is not significantly different on test data. Third, that our proposed human-likeness rewards are reasonable for training chatbots as long as they use lengthy dialogue histories of at least 10 sentences.

    @inproceedings{lirolem35954,
    year = {2019},
    booktitle = {International Joint Conference on Neural Networks (IJCNN)},
    author = {Heriberto Cuayahuitl and Donghyeon Lee and Seonghan Ryu and Sungja Choi and Inchul Hwang and Jihie Kim},
    publisher = {IEEE},
    title = {Deep Reinforcement Learning for Chatbots Using Clustered Actions and Human-Likeness Rewards},
    month = {July},
    abstract = {Training chatbots using the reinforcement learning paradigm is challenging due to high-dimensional states, infinite action spaces and the difficulty in specifying the reward function. We address such problems using clustered actions instead of infinite actions, and a simple but promising reward function based on human-likeness scores derived from human-human dialogue data. We train Deep Reinforcement Learning (DRL) agents using chitchat data in raw text{--}without any manual annotations. Experimental results using different splits of training data report the following. First, that our agents learn reasonable policies in the environments they get familiarised with, but their performance drops substantially when they are exposed to a test set of unseen dialogues. Second, that the choice of sentence embedding size between 100 and 300 dimensions is not significantly different on test data. Third, that our proposed human-likeness rewards are reasonable for training chatbots as long as they use lengthy dialogue histories of at least 10 sentences.},
    url = {http://eprints.lincoln.ac.uk/35954/}
    }
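
    The "clustered actions" idea replaces an unbounded space of candidate responses with a small, fixed set of cluster identifiers over sentence embeddings, which a DRL agent can then select among. A toy sketch follows; the random embeddings and the number of clusters are stand-ins, not the paper's trained sentence representations.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)
    embeddings = rng.normal(size=(500, 100))   # the paper uses 100- or 300-dim sentence embeddings
    N_ACTIONS = 20                             # illustrative size of the clustered action set

    kmeans = KMeans(n_clusters=N_ACTIONS, n_init=10, random_state=0).fit(embeddings)

    def action_for(sentence_embedding):
        """The agent picks among N_ACTIONS cluster IDs instead of infinitely many sentences."""
        return int(kmeans.predict(sentence_embedding.reshape(1, -1))[0])

    print(action_for(embeddings[0]))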
  • H. Cuayahuitl, “A data-efficient deep learning approach for deployable multimodal social robots,” Neurocomputing, 2019.
    [BibTeX] [Abstract] [Download PDF]

    The deep supervised and reinforcement learning paradigms (among others) have the potential to endow interactive multimodal social robots with the ability of acquiring skills autonomously. But it is still not very clear yet how they can be best deployed in real world applications. As a step in this direction, we propose a deep learning-based approach for efficiently training a humanoid robot to play multimodal games, and use the game of ‘Noughts & Crosses’ with two variants as a case study. Its minimum requirements for learning to perceive and interact are based on a few hundred example images, a few example multimodal dialogues and physical demonstrations of robot manipulation, and automatic simulations. In addition, we propose novel algorithms for robust visual game tracking and for competitive policy learning with high winning rates, which substantially outperform DQN-based baselines. While an automatic evaluation shows evidence that the proposed approach can be easily extended to new games with competitive robot behaviours, a human evaluation with 130 humans playing with the Pepper robot confirms that highly accurate visual perception is required for successful game play.

    @article{lirolem33533,
    author = {Heriberto Cuayahuitl},
    note = {The final published version of this article can be accessed online at https://www.journals.elsevier.com/neurocomputing/},
    year = {2019},
    journal = {Neurocomputing},
    publisher = {Elsevier},
    title = {A Data-Efficient Deep Learning Approach for Deployable Multimodal Social Robots},
    url = {http://eprints.lincoln.ac.uk/33533/},
    abstract = {The deep supervised and reinforcement learning paradigms (among others) have the potential to endow interactive multimodal social robots with the ability of acquiring skills autonomously. But it is still not very clear yet how they can be best deployed in real world applications. As a step in this direction, we propose a deep learning-based approach for efficiently training a humanoid robot to play multimodal games---and use the game of `Noughts \& Crosses' with two variants as a case study. Its minimum requirements for learning to perceive and interact are based on a few hundred example images, a few example multimodal dialogues and physical demonstrations of robot manipulation, and automatic simulations. In addition, we propose novel algorithms for robust visual game tracking and for competitive policy learning with high winning rates, which substantially outperform DQN-based baselines. While an automatic evaluation shows evidence that the proposed approach can be easily extended to new games with competitive robot behaviours, a human evaluation with 130 humans playing with the {\it Pepper} robot confirms that highly accurate visual perception is required for successful game play.},
    }
  • K. Elgeneidy, G. Neumann, S. Pearson, M. Jackson, and N. Lohse, “Contact detection and size estimation using a modular soft gripper with embedded flex sensors,” in International Conference on Intelligent Robots and Systems (IROS 2018), 2019.
    [BibTeX] [Abstract] [Download PDF]

    Grippers made from soft elastomers are able to passively and gently adapt to their targets allowing deformable objects to be grasped safely without causing bruise or damage. However, it is difficult to regulate the contact forces due to the lack of contact feedback for such grippers. In this paper, a modular soft gripper is presented utilizing interchangeable soft pneumatic actuators with embedded flex sensors as fingers of the gripper. The fingers can be assembled in different configurations using 3D printed connectors. The paper investigates the potential of utilizing the simple sensory feedback from the flex and pressure sensors to make additional meaningful inferences regarding the contact state and grasped object size. We study the effect of the grasped object size and contact type on the combined feedback from the embedded flex sensors of opposing fingers. Our results show that a simple linear relationship exists between the grasped object size and the final flex sensor reading at fixed input conditions, despite the variation in object weight and contact type. Additionally, by simply monitoring the time series response from the flex sensor, contact can be detected by comparing the response to the known free-bending response at the same input conditions. Furthermore, by utilizing the measured internal pressure supplied to the soft fingers, it is possible to distinguish between power and pinch grasps, as the contact type affects the rate of change in the flex sensor readings against the internal pressure.

    @inproceedings{lirolem34713,
    month = {January},
    title = {Contact Detection and Size Estimation Using a Modular Soft Gripper with Embedded Flex Sensors},
    author = {Khaled Elgeneidy and Gerhard Neumann and Simon Pearson and Michael Jackson and Niels Lohse},
    year = {2019},
    booktitle = {International Conference on Intelligent Robots and Systems (IROS 2018)},
    abstract = {Grippers made from soft elastomers are able to passively and gently adapt to their targets allowing deformable objects to be grasped safely without causing bruise or damage. However, it is difficult to regulate the contact forces due to the lack of contact feedback for such grippers. In this paper, a modular soft gripper is presented utilizing interchangeable soft pneumatic actuators with embedded flex sensors as fingers of the gripper. The fingers can be assembled in different configurations using 3D printed connectors. The paper investigates the potential of utilizing the simple sensory feedback from the flex and pressure sensors to make additional meaningful inferences regarding the contact state and grasped object size. We study the effect of the grasped object size and contact type on the combined feedback from the embedded flex sensors of opposing fingers. Our results show that a simple linear relationship exists between the grasped object size and the final flex sensor reading at fixed input conditions, despite the variation in object weight and contact type. Additionally, by simply monitoring the time series response from the flex sensor, contact can be detected by comparing the response to the known free-bending response at the same input conditions. Furthermore, by utilizing the measured internal pressure supplied to the soft fingers, it is possible to distinguish between power and pinch grasps, as the contact type affects the rate of change in the flex sensor readings against the internal pressure.},
    url = {http://eprints.lincoln.ac.uk/34713/}
    }
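
    The two inferences reported above lend themselves to a very small sketch: (1) object size estimated from the final flex-sensor reading through a linear fit, and (2) contact detected when the live response departs from the known free-bending response at the same input conditions. All numbers and the threshold below are invented for illustration.

    import numpy as np

    # (1) Calibration: final flex readings recorded while grasping objects of known size.
    known_sizes = np.array([30.0, 40.0, 50.0, 60.0, 70.0])          # mm
    final_readings = np.array([512.0, 478.0, 441.0, 405.0, 372.0])  # sensor counts
    slope, intercept = np.polyfit(final_readings, known_sizes, deg=1)

    def estimate_size(final_reading):
        return slope * final_reading + intercept

    # (2) Contact detection: deviation of the live series from the free-bending series.
    def contact_detected(live_series, free_bending_series, threshold=15.0):
        deviation = np.abs(np.asarray(live_series) - np.asarray(free_bending_series))
        return bool(np.any(deviation > threshold))

    print(round(estimate_size(450.0), 1))                                # estimated object size in mm
    print(contact_detected([510, 495, 470, 430], [512, 500, 488, 476]))  # True once contact deflects the finger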
  • K. Elgeneidy, P. Lightbody, S. Pearson, and G. Neumann, “Characterising 3D-printed soft Fin Ray robotic fingers with layer jamming capability for delicate grasping,” in RoboSoft 2019, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Motivated by the growing need within the agrifood industry to automate the handling of delicate produce, this paper presents soft robotic fingers utilising the Fin Ray effect to passively and gently adapt to delicate targets. The proposed Soft Fin Ray fingers feature thin ribs and are entirely 3D printed from a flexible material (NinjaFlex) to enhance their shape adaptation, compared to the original Fin Ray fingers. To overcome their reduced force generation, the effects of the angle and spacing of the flexible ribs were experimentally characterised. The results showed that at large displacements, layer jamming between tilted flexible ribs can significantly enhance the force generation, while minimal contact forces can be still maintained at small displacements for delicate grasping.

    @inproceedings{lirolem34950,
    booktitle = {RoboSoft 2019},
    year = {2019},
    author = {Khaled Elgeneidy and Peter Lightbody and Simon Pearson and Gerhard Neumann},
    title = {Characterising 3D-printed Soft Fin Ray Robotic Fingers with Layer Jamming Capability for Delicate Grasping},
    month = {June},
    url = {http://eprints.lincoln.ac.uk/34950/},
    abstract = {Motivated by the growing need within the agrifood industry to automate the handling of delicate produce, this paper presents soft robotic fingers utilising the Fin Ray effect to passively and gently adapt to delicate targets. The proposed Soft Fin Ray fingers feature thin ribs and are entirely 3D printed from a flexible material (NinjaFlex) to enhance their shape adaptation, compared to the original Fin Ray fingers. To overcome their reduced force generation, the effects of the angle and spacing of the flexible ribs were experimentally characterised. The results showed that at large displacements, layer jamming between tilted flexible ribs can significantly enhance the force generation, while minimal contact forces can be still maintained at small displacements for delicate grasping.}
    }
  • C. Fox, Use and citation of paper “Fox et al (2018), ‘When should the chicken cross the road? Game theory for autonomous vehicle – human interactions’ conference paper” by the Law Commission to review and potentially change the law of the UK on autonomous vehicles. Cited in their consultation report, “Automated Vehicles: A joint preliminary consultation paper” on p174, ref 651., 2019.
    [BibTeX] [Abstract] [Download PDF]

    Topic of this consultation: The Centre for Connected and Automated Vehicles (CCAV) has asked the Law Commission of England and Wales and the Scottish Law Commission to examine options for regulating automated road vehicles. It is a three-year project, running from March 2018 to March 2021. This preliminary consultation paper focuses on the safety of passenger vehicles. Driving automation refers to a broad range of vehicle technologies. Examples range from widely-used technologies that assist human drivers (such as cruise control) to vehicles that drive themselves with no human intervention. We concentrate on automated driving systems which do not need human drivers for at least part of the journey. This paper looks at three key themes. First, we consider how safety can be assured before and after automated driving systems are deployed. Secondly, we explore criminal and civil liability. Finally, we examine the need to adapt road rules for artificial intelligence.

    @misc{lirolem34922,
    journal = {Automated Vehicles: A joint preliminary consultation paper},
    title = {Use and citation of paper "Fox et al (2018), 'When should the chicken cross the road? Game theory for autonomous vehicle - human interactions' conference paper" by the Law Commission to review and potentially change the law of the UK on autonomous vehicles. Cited in their consultation report, "Automated Vehicles: A joint preliminary consultation paper" on p174, ref 651.},
    author = {Charles Fox},
    year = {2019},
    month = {January},
    abstract = {Topic of this consultation: The Centre for Connected and Automated Vehicles (CCAV) has
    asked the Law Commission of England and Wales and the Scottish Law Commission to
    examine options for regulating automated road vehicles. It is a three-year project, running from
    March 2018 to March 2021. This preliminary consultation paper focuses on the safety of
    passenger vehicles.
    Driving automation refers to a broad range of vehicle technologies. Examples range from
    widely-used technologies that assist human drivers (such as cruise control) to vehicles that
    drive themselves with no human intervention. We concentrate on automated driving systems
    which do not need human drivers for at least part of the journey.
    This paper looks at three key themes. First, we consider how safety can be assured before
    and after automated driving systems are deployed. Secondly, we explore criminal and civil
    liability. Finally, we examine the need to adapt road rules for artificial intelligence.},
    url = {http://eprints.lincoln.ac.uk/34922/}
    }
  • Q. Fu, H. Wang, C. Hu, and S. Yue, “Towards computational models and applications of insect visual systems for motion perception: a review,” Artificial Life, vol. 25, iss. 3, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Motion perception is a critical capability determining a variety of aspects of insects' life, including avoiding predators, foraging and so forth. A good number of motion detectors have been identified in the insects' visual pathways. Computational modelling of these motion detectors has not only been providing effective solutions to artificial intelligence, but also benefiting the understanding of complicated biological visual systems. These biological mechanisms through millions of years of evolutionary development will have formed solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models originating from biological research of insects' visual systems in the literature. These motion perception models or neural networks comprise the looming sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation sensitive neural systems of direction selective neurons (DSNs) in fruit flies, bees and locusts, as well as the small target motion detectors (STMDs) in dragonflies and hover flies. We also review the applications of these models to robots and vehicles. Through these modelling studies, we summarise the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multiple systems integration and hardware realisation of these bio-inspired motion perception models.

    @article{lirolem35584,
    number = {3},
    journal = {Artificial life},
    publisher = {MIT Press},
    title = {Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review},
    author = {Qinbing Fu and Hongxin Wang and Cheng Hu and Shigang Yue},
    year = {2019},
    volume = {25},
    abstract = {Motion perception is a critical capability determining a variety of aspects of insects' life, including avoiding predators, foraging and so forth. A good number of motion detectors have been identified in the insects' visual pathways. Computational modelling of these motion detectors has not only been providing effective solutions to artificial intelligence, but also benefiting the understanding of complicated biological visual systems. These biological mechanisms through millions of years of evolutionary development will have formed solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models originating from biological research of insects' visual systems in the literature. These motion perception models or neural networks comprise the looming sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation sensitive neural systems of direction selective neurons (DSNs) in fruit flies, bees and locusts, as well as the small target motion detectors (STMDs) in dragonflies and hover flies. We also review the applications of these models to robots and vehicles. Through these modelling studies, we summarise the methodologies that generate different direction and size selectivity in motion perception. At last, we discuss about multiple systems integration and hardware realisation of these bio-inspired motion perception models.},
    url = {http://eprints.lincoln.ac.uk/35584/}
    }
  • Q. Fu, N. Bellotto, H. Wang, C. F. Rind, H. Wang, and S. Yue, “A visual neural network for robust collision perception in vehicle driving scenarios,” in 15th International Conference on Artificial Intelligence Applications and Innovations, 2019.
    [BibTeX] [Abstract] [Download PDF]

    This research addresses the challenging problem of visual collision detection in very complex and dynamic real physical scenes, specifically, the vehicle driving scenarios. This research takes inspiration from a large-field looming sensitive neuron, i.e., the lobula giant movement detector (LGMD) in the locust’s visual pathways, which represents high spike frequency to rapid approaching objects. Building upon our previous models, in this paper we propose a novel inhibition mechanism that is capable of adapting to different levels of background complexity. This adaptive mechanism works effectively to mediate the local inhibition strength and tune the temporal latency of local excitation reaching the LGMD neuron. As a result, the proposed model is effective to extract colliding cues from complex dynamic visual scenes. We tested the proposed method using a range of stimuli including simulated movements in grating backgrounds and shifting of a natural panoramic scene, as well as vehicle crash video sequences. The experimental results demonstrate the proposed method is feasible for fast collision perception in real-world situations with potential applications in future autonomous vehicles.

    @inproceedings{lirolem35586,
    title = {A Visual Neural Network for Robust Collision Perception in Vehicle Driving Scenarios},
    author = {Qinbing Fu and Nicola Bellotto and Huatian Wang and F. Claire Rind and Hongxin Wang and Shigang Yue},
    year = {2019},
    booktitle = {15th International Conference on Artificial Intelligence Applications and Innovations},
    month = {May},
    url = {http://eprints.lincoln.ac.uk/35586/},
    abstract = {This research addresses the challenging problem of visual collision detection in very complex and dynamic real physical scenes, specifically, the vehicle driving scenarios. This research takes inspiration from a large-field looming sensitive neuron, i.e., the lobula giant movement detector (LGMD) in the locust's visual pathways, which represents high spike frequency to rapid approaching objects. Building upon our previous models, in this paper we propose a novel inhibition mechanism that is capable of adapting to different levels of background complexity. This adaptive mechanism works effectively to mediate the local inhibition strength and tune the temporal latency of local excitation reaching the LGMD neuron. As a result, the proposed model is effective to extract colliding cues from complex dynamic visual scenes. We tested the proposed method using a range of stimuli including simulated movements in grating backgrounds and shifting of a natural panoramic scene, as well as vehicle crash video sequences. The experimental results demonstrate the proposed method is feasible for fast collision perception in real-world situations with potential applications in future autonomous vehicles.},
    }
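
    For readers unfamiliar with LGMD-style models, the basic structure is: excitation from frame differencing, inhibition from a spatially spread and temporally delayed copy of that excitation, and an output obtained by summing the residual. The sketch below shows only this skeleton on a synthetic looming disc; the adaptive inhibition mechanism that is the paper's actual contribution is not reproduced here.

    import numpy as np
    from scipy import ndimage

    def lgmd_response(frames, blur_sigma=2.0):
        responses = []
        prev_excitation = np.zeros_like(frames[0], dtype=np.float64)
        for t in range(1, len(frames)):
            excitation = np.abs(frames[t] - frames[t - 1])
            inhibition = ndimage.gaussian_filter(prev_excitation, sigma=blur_sigma)  # spread + 1-frame delay
            responses.append(np.maximum(excitation - inhibition, 0.0).sum())
            prev_excitation = excitation
        return responses

    # A disc growing frame by frame (an approaching object) drives the response upwards.
    yy, xx = np.mgrid[:64, :64]
    frames = [((yy - 32) ** 2 + (xx - 32) ** 2 <= r * r).astype(np.float64) for r in range(2, 20, 2)]
    print([round(v, 1) for v in lgmd_response(frames)])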
  • B. Grieve, T. Duckett, M. Collison, L. Boyd, J. West, H. Yin, F. Arvin, and S. Pearson, “The challenges posed by global broadacre crops in delivering smart agri-robotic solutions: a fundamental rethink is required.,” Global Food Security, vol. 23, p. 116–124, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Threats to global food security from multiple sources, such as population growth, ageing farming populations, meat consumption trends, climate-change effects on abiotic and biotic stresses, and the environmental impacts of agriculture, are well publicised. In addition, with ever increasing tolerance of pests, diseases and weeds there is growing pressure on traditional crop genetic and protective chemistry technologies of the ‘Green Revolution’. To ease the burden of these challenges, there has been a move to automate and robotise aspects of the farming process. This drive has focussed typically on higher value sectors, such as horticulture and viticulture, that have relied on seasonal manual labour to maintain produce supply. In developed economies, and increasingly developing nations, pressure on labour supply has become unsustainable and forced the need for greater mechanisation and higher labour productivity. This paper creates the case that for broadacre crops, such as cereals, a wholly new approach is necessary, requiring the establishment of an integrated biology & physical engineering infrastructure, which can work in harmony with current breeding, chemistry and agronomic solutions. For broadacre crops the driving pressure is to sustainably intensify production; increase yields and/or productivity whilst reducing environmental impact. Additionally, our limited understanding of the complex interactions between the variations in pests, weeds, pathogens, soils, water, environment and crops is inhibiting growth in resource productivity and creating yield gaps. We argue that for agriculture to deliver knowledge based sustainable intensification requires a new generation of Smart Technologies, which combine sensors and robotics with localised and/or cloud-based Artificial Intelligence (AI).

    @article{lirolem35842,
    journal = {Global Food Security},
    publisher = {Elsevier},
    volume = {23},
    month = {December},
    title = {The challenges posed by global broadacre crops in delivering smart agri-robotic solutions: A fundamental rethink is required.},
    author = {Bruce Grieve and Tom Duckett and Martin Collison and Lesley Boyd and Jon West and Hujun Yin and Farshad Arvin and Simon Pearson},
    year = {2019},
    pages = {116--124},
    abstract = {Threats to global food security from multiple sources, such as population growth, ageing farming populations, meat consumption trends, climate-change effects on abiotic and biotic stresses, and the environmental impacts of agriculture, are well publicised. In addition, with ever increasing tolerance of pests, diseases and weeds there is growing pressure on traditional crop genetic and protective chemistry technologies of the `Green Revolution'. To ease the burden of these challenges, there has been a move to automate and robotise aspects of the farming process. This drive has focussed typically on higher value sectors, such as horticulture and viticulture, that have relied on seasonal manual labour to maintain produce supply. In developed economies, and increasingly developing nations, pressure on labour supply has become unsustainable and forced the need for greater mechanisation and higher labour productivity. This paper creates the case that for broadacre crops, such as cereals, a wholly new approach is necessary, requiring the establishment of an integrated biology \& physical engineering infrastructure, which can work in harmony with current breeding, chemistry and agronomic solutions. For broadacre crops the driving pressure is to sustainably intensify production; increase yields and/or productivity whilst reducing environmental impact. Additionally, our limited understanding of the complex interactions between the variations in pests, weeds, pathogens, soils, water, environment and crops is inhibiting growth in resource productivity and creating yield gaps. We argue that for agriculture to deliver knowledge based sustainable intensification requires a new generation of Smart Technologies, which combine sensors and robotics with localised and/or cloud-based Artificial Intelligence (AI).},
    url = {http://eprints.lincoln.ac.uk/35842/}
    }
  • S. Kottayil, P. Tsoleridis, K. Rossa, and C. Fox, “Investigation of driver route choice behaviour using Bluetooth data,” in 15th World Conference on Transport Research, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Many local authorities use small-scale transport models to manage their transportation networks. These may assume drivers' behaviour to be rational in choosing the fastest route, and thus that all drivers behave the same given an origin and destination, leading to simplified aggregate flow models, fitted to anonymous traffic flow measurements. Recent price falls in traffic sensors, data storage, and compute power now enable Data Science to empirically test such assumptions, by using per-driver data to infer route selection from sensor observations and compare with optimal route selection. A methodology is presented using per-driver data to analyse driver route choice behaviour in transportation networks. Traffic flows on multiple measurable routes for origin destination pairs are compared based on the length of each route. A driver rationality index is defined by considering the shortest physical route between an origin-destination pair. The proposed method is intended to aid calibration of parameters used in traffic assignment models e.g. weights in generalized cost formulations or dispersion within stochastic user equilibrium models. The method is demonstrated using raw sensor datasets collected through Bluetooth sensors in the area of Chesterfield, Derbyshire, UK. The results for this region show that routes with a significant difference in lengths of their paths have the majority (71%) of drivers using the optimal path but as the difference in length decreases, the probability of suboptimal route choice decreases (27%). The methodology can be used for extended research considering the impact on route choice of other factors including travel time and road specific conditions.

    @inproceedings{lirolem34791,
    month = {May},
    title = {Investigation of Driver Route Choice Behaviour using Bluetooth Data},
    publisher = {Elsevier},
    author = {Sreedevi Kottayil and Panagiotis Tsoleridis and Kacper Rossa and Charles Fox},
    booktitle = {15th World Conference on Transport Research},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/34791/},
    abstract = {Many local authorities use small-scale transport models to manage their transportation networks. These may assume drivers? behaviour to be rational in choosing the fastest route, and thus that all drivers behave the same given an origin and destination, leading to simplified aggregate flow models, fitted to anonymous traffic flow measurements. Recent price falls in traffic sensors, data storage, and compute power now enable Data Science to empirically test such assumptions, by using per-driver data to infer route selection from sensor observations and compare with optimal route selection. A methodology is presented using per-driver data to analyse driver route choice behaviour in transportation networks. Traffic flows on multiple measurable routes for origin destination pairs are compared based on the length of each route. A driver rationality index is defined by considering the shortest physical route between an origin-destination pair. The proposed method is intended to aid calibration of parameters used in traffic assignment models e.g. weights in generalized cost formulations or dispersion within stochastic user equilibrium models. The method is demonstrated using raw sensor datasets collected through Bluetooth sensors in the area of Chesterfield, Derbyshire, UK. The results for this region show that routes with a significant difference in lengths of their paths have the majority (71\%) of drivers using the optimal path but as the difference in length decreases, the probability of suboptimal route choice decreases (27\%). The methodology can be used for extended research considering the impact on route choice of other factors including travel time and road specific conditions.},
    }
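
    The rationality index described above boils down to, for each origin-destination pair, the share of observed trips that used the shortest available route. A toy computation is shown below; the data layout and numbers are invented, not the Chesterfield dataset.

    from collections import defaultdict

    # (origin, destination) -> {route_id: route_length_km}
    route_lengths = {
        ("A", "B"): {"r1": 4.2, "r2": 6.9},
        ("C", "D"): {"r1": 3.1, "r2": 3.3},
    }
    # Trips inferred from Bluetooth re-identifications: (origin, destination, route_used)
    trips = ([("A", "B", "r1")] * 71 + [("A", "B", "r2")] * 29
             + [("C", "D", "r1")] * 55 + [("C", "D", "r2")] * 45)

    counts = defaultdict(lambda: defaultdict(int))
    for o, d, route in trips:
        counts[(o, d)][route] += 1

    for od, routes in route_lengths.items():
        shortest = min(routes, key=routes.get)          # physically shortest route for this OD pair
        total = sum(counts[od].values())
        print(od, "rationality index =", round(counts[od][shortest] / total, 2))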
  • J. Lock, G. Cielniak, and N. Bellotto, “Active object search with a mobile device for people with visual impairments,” in 14th International Conference on Computer Vision Theory and Applications (VISAPP), 2019, p. 476–485.
    [BibTeX] [Abstract] [Download PDF]

    Modern smartphones can provide a multitude of services to assist people with visual impairments, and their cameras in particular can be useful for assisting with tasks, such as reading signs or searching for objects in unknown environments. Previous research has looked at ways to solve these problems by processing the camera's video feed, but very little work has been done in actively guiding the user towards specific points of interest, maximising the effectiveness of the underlying visual algorithms. In this paper, we propose a control algorithm based on a Markov Decision Process that uses a smartphone's camera to generate real-time instructions to guide a user towards a target object. The solution is part of a more general active vision application for people with visual impairments. An initial implementation of the system on a smartphone was experimentally evaluated with participants with healthy eyesight to determine the performance of the control algorithm. The results show the effectiveness of our solution and its potential application to help people with visual impairments find objects in unknown environments.

    @inproceedings{lirolem34596,
    title = {Active Object Search with a Mobile Device for People with Visual Impairments},
    publisher = {VISIGRAPP},
    author = {Jacobus Lock and Grzegorz Cielniak and Nicola Bellotto},
    booktitle = {14th International Conference on Computer Vision Theory and Applications (VISAPP)},
    year = {2019},
    pages = {476--485},
    abstract = {Modern smartphones can provide a multitude of services to assist people with visual impairments, and their cameras in particular can be useful for assisting with tasks, such as reading signs or searching for objects in unknown environments. Previous research has looked at ways to solve these problems by processing the camera's video feed, but very little work has been done in actively guiding the user towards specific points of interest, maximising the effectiveness of the underlying visual algorithms. In this paper, we propose a control algorithm based on a Markov Decision Process that uses a smartphone?s camera to generate real-time instructions to guide a user towards a target object. The solution is part of a more general active vision application for people with visual impairments. An initial implementation of the system on a smartphone was experimentally evaluated with participants with healthy eyesight to determine the performance of the control algorithm. The results show the effectiveness of our solution and its potential application to help people with visual impairments find objects in unknown environments.},
    url = {http://eprints.lincoln.ac.uk/34596/}
    }
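
    The guidance loop described in this abstract maps the target's position in the camera frame to a short verbal instruction for the user. The paper derives its policy from a Markov Decision Process; the greedy rule below is only a simplified stand-in to show the controller's inputs and outputs, with invented frame size and tolerance.

    def instruction(target_u, target_v, frame_w=640, frame_h=480, tolerance=40):
        """Turn the target's pixel offset from the image centre into a guidance instruction."""
        du = target_u - frame_w / 2
        dv = target_v - frame_h / 2
        if abs(du) <= tolerance and abs(dv) <= tolerance:
            return "hold still, target centred"
        if abs(du) >= abs(dv):
            return "turn right" if du > 0 else "turn left"
        return "tilt down" if dv > 0 else "tilt up"

    print(instruction(520, 250))  # e.g. target detected towards the right of the frame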
  • L. Sun, C. Zhao, Z. Yan, P. Liu, T. Duckett, and R. Stolkin, “A novel weakly-supervised approach for RGB-D-based nuclear waste object detection and categorization,” IEEE Sensors Journal, vol. 19, iss. 9, p. 3487–3500, 2019.
    [BibTeX] [Abstract] [Download PDF]

    This paper addresses the problem of RGBD-based detection and categorization of waste objects for nuclear decommissioning. To enable autonomous robotic manipulation for nuclear decommissioning, nuclear waste objects must be detected and categorized. However, as a novel industrial application, large amounts of annotated waste object data are currently unavailable. To overcome this problem, we propose a weakly-supervised learning approach which is able to learn a deep convolutional neural network (DCNN) from unlabelled RGBD videos while requiring very few annotations. The proposed method also has the potential to be applied to other household or industrial applications. We evaluate our approach on the Washington RGB-D object recognition benchmark, achieving the state-of-the-art performance among semi-supervised methods. More importantly, we introduce a novel dataset, i.e. Birmingham nuclear waste simulants dataset, and evaluate our proposed approach on this novel industrial object recognition challenge. We further propose a complete real-time pipeline for RGBD-based detection and categorization of nuclear waste simulants. Our weakly-supervised approach has demonstrated to be highly effective in solving a novel RGB-D object detection and recognition application with limited human annotations.

    @article{lirolem35699,
    number = {9},
    month = {May},
    publisher = {IEEE},
    journal = {IEEE Sensors Journal},
    volume = {19},
    pages = {3487--3500},
    title = {A Novel Weakly-supervised approach for RGB-D-based Nuclear Waste Object Detection and Categorization},
    year = {2019},
    author = {Li Sun and Cheng Zhao and Zhi Yan and Pengcheng Liu and Tom Duckett and Rustam Stolkin},
    url = {http://eprints.lincoln.ac.uk/35699/},
    abstract = {This paper addresses the problem of RGBD-based detection and categorization of waste objects for nuclear decommissioning. To enable autonomous robotic manipulation for nuclear decommissioning, nuclear waste objects must be detected and categorized. However, as a novel industrial application, large amounts of annotated waste object data are currently unavailable. To overcome this problem, we propose a weakly-supervised learning approach which is able to learn a deep convolutional neural network (DCNN) from unlabelled RGBD videos while requiring very few annotations. The proposed method also has the potential to be applied to other household or industrial applications. We evaluate our approach on the Washington RGB-D object recognition benchmark, achieving the state-of-the-art performance among semi-supervised methods. More importantly, we introduce a novel dataset, i.e. Birmingham nuclear waste simulants dataset, and evaluate our proposed approach on this novel industrial object recognition challenge. We further propose a complete real-time pipeline for RGBD-based detection and categorization of nuclear waste simulants. Our weakly-supervised approach has demonstrated to be highly effective in solving a novel RGB-D object detection and recognition application with limited human annotations.}
    }
  • H. Wang, J. Peng, Q. Fu, H. Wang, and S. Yue, “Visual cue integration for small target motion detection in natural cluttered backgrounds,” in The 2019 International Joint Conference on Neural Networks (IJCNN), 2019.
    [BibTeX] [Abstract] [Download PDF]

    The robust detection of small targets against cluttered background is important for future artificial visual systems in searching and tracking applications. The insects' visual systems have demonstrated excellent ability to avoid predators, find prey or identify conspecifics, which always appear as small dim speckles in the visual field. Building a computational model of the insects' visual pathways could provide effective solutions to detect small moving targets. Although a few visual system models have been proposed, they only make use of small-field visual features for motion detection and their detection results often contain a number of false positives. To address this issue, we develop a new visual system model for small target motion detection against cluttered moving backgrounds. Compared to the existing models, the small-field and wide-field visual features are separately extracted by two motion-sensitive neurons to detect small target motion and background motion. These two types of motion information are further integrated to filter out false positives. Extensive experiments showed that the proposed model can outperform the existing models in terms of detection rates.

    @inproceedings{lirolem35684,
    year = {2019},
    booktitle = {The 2019 International Joint Conference on Neural Networks (IJCNN)},
    author = {Hongxin Wang and Jigen Peng and Qinbing Fu and Huatian Wang and Shigang Yue},
    title = {Visual Cue Integration for Small Target Motion Detection in Natural Cluttered Backgrounds},
    publisher = {IEEE},
    month = {July},
    url = {http://eprints.lincoln.ac.uk/35684/},
    abstract = {The robust detection of small targets against cluttered background is important for future artificial visual systems in searching and tracking applications. The insects' visual systems have demonstrated excellent ability to avoid predators, find prey or identify conspecifics, which always appear as small dim speckles in the visual field. Building a computational model of the insects' visual pathways could provide effective solutions to detect small moving targets. Although a few visual system models have been proposed, they only make use of small-field visual features for motion detection and their detection results often contain a number of false positives. To address this issue, we develop a new visual system model for small target motion detection against cluttered moving backgrounds. Compared to the existing models, the small-field and wide-field visual features are separately extracted by two motion-sensitive neurons to detect small target motion and background motion. These two types of motion information are further integrated to filter out false positives. Extensive experiments showed that the proposed model can outperform the existing models in terms of detection rates.},
    }
  • H. Wang, Q. Fu, H. Wang, J. Peng, P. Baxter, C. Hu, and S. Yue, “Angular velocity estimation of image motion mimicking the honeybee tunnel centring behaviour,” in The 2019 International Joint Conference on Neural Networks, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Insects use visual information to estimate angular velocity of retinal image motion, which determines a variety of flight behaviours including speed regulation, tunnel centring and visual navigation. For angular velocity estimation, honeybees show large spatial-independence against visual stimuli, whereas the previous models have not fulfilled such an ability. To address this issue, we propose a bio-plausible model for estimating the image motion velocity based on behavioural experiments of the honeybee flying through patterned tunnels. The proposed model contains mainly three parts, the texture estimation layer for spatial information extraction, the delay-and-correlate layer for temporal information extraction and the decoding layer for angular velocity estimation. This model produces responses that are largely independent of the spatial frequency in grating experiments. And the model has been implemented in a virtual bee for tunnel centring simulations. The results coincide with both electro-physiological neuron spike and behavioural path recordings, which indicates our proposed method provides a better explanation of the honeybee's image motion detection mechanism guiding the tunnel centring behaviour.

    @inproceedings{lirolem35685,
    month = {July},
    year = {2019},
    booktitle = {The 2019 International Joint Conference on Neural Networks},
    author = {Huatian Wang and Qinbing Fu and Hongxin Wang and Jigen Peng and Paul Baxter and Cheng Hu and Shigang Yue},
    title = {Angular Velocity Estimation of Image Motion Mimicking the Honeybee Tunnel Centring Behaviour},
    publisher = {IEEE},
    url = {http://eprints.lincoln.ac.uk/35685/},
    abstract = {Insects use visual information to estimate angular velocity of retinal image motion, which determines a variety of flight behaviours including speed regulation, tunnel centring and visual navigation. For angular velocity estimation, honeybees show large spatial-independence against visual stimuli, whereas the previous models have not fulfilled such an ability. To address this issue, we propose a bio-plausible model for estimating the image motion velocity based on behavioural experiments of the honeybee flying through patterned tunnels. The proposed model contains mainly three parts, the texture estimation layer for spatial information extraction, the delay-and-correlate layer for temporal information extraction and the decoding layer for angular velocity estimation. This model produces responses that are largely independent of the spatial frequency in grating experiments. And the model has been implemented in a virtual bee for tunnel centring simulations. The results coincide with both electro-physiological neuron spike and behavioural path recordings, which indicates our proposed method provides a better explanation of the honeybee's image motion detection mechanism guiding the tunnel centring behaviour.}
    }
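
    The delay-and-correlate stage mentioned in this abstract is essentially a Hassenstein-Reichardt correlator: the delayed signal at each pixel is multiplied with the current signal at its neighbour, in both directions, and the difference indicates the direction and (roughly) the speed of image motion. The sketch below shows only that stage on a 1-D drifting grating; the texture-estimation and decoding layers that remove the spatial-frequency dependence are omitted.

    import numpy as np

    def reichardt_output(strip_now, strip_prev):
        """Correlate each pixel's delayed signal with its neighbour's current signal."""
        rightward = strip_prev[:-1] * strip_now[1:]   # delayed left pixel x current right pixel
        leftward = strip_prev[1:] * strip_now[:-1]    # delayed right pixel x current left pixel
        return float(np.sum(rightward - leftward))    # > 0 for rightward image motion

    x = np.arange(200)
    grating = lambda phase: np.sin(2 * np.pi * (x - phase) / 20.0)
    for shift in (1, 2, 3):                           # pixels moved per frame
        print(shift, round(reichardt_output(grating(shift), grating(0.0)), 2))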
  • H. Wang, Q. Fu, H. Wang, J. Peng, and S. Yue, “Constant angular velocity regulation for visually guided terrain following,” in 15th International Conference on Artificial Intelligence Applications and Innovations, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Insects use visual cues to control their flight behaviours. By estimating the angular velocity of the visual stimuli and regulating it to a constant value, honeybees can perform a terrain following task which keeps a certain height above the undulated ground. For mimicking this behaviour in a bio-plausible computation structure, this paper presents a new angular velocity decoding model based on the honeybee's behavioural experiments. The model consists of three parts, the texture estimation layer for spatial information extraction, the motion detection layer for temporal information extraction and the decoding layer combining information from previous layers to estimate the angular velocity. Compared to previous methods in this field, the proposed model produces responses largely independent of the spatial frequency and contrast in grating experiments. The angular velocity based control scheme is proposed to implement the model into a bee simulated by the game engine Unity. The perfect terrain following above patterned ground and successfully flying over irregular textured terrain show its potential for micro unmanned aerial vehicles' terrain following.

    @inproceedings{lirolem35595,
    title = {Constant Angular Velocity Regulation for Visually Guided Terrain Following},
    author = {Huatian Wang and Qinbing Fu and Hongxin Wang and Jigen Peng and Shigang Yue},
    booktitle = {15th International Conference on Artificial Intelligence Applications and Innovations},
    year = {2019},
    abstract = {Insects use visual cues to control their flight behaviours. By estimating the angular velocity of the visual stimuli and regulating it to a constant value, honeybees can perform a terrain following task which maintains a certain height above the undulating ground. To mimic this behaviour in a bio-plausible computational structure, this paper presents a new angular velocity decoding model based on the honeybee's behavioural experiments. The model consists of three parts: the texture estimation layer for spatial information extraction, the motion detection layer for temporal information extraction, and the decoding layer combining information from the previous layers to estimate the angular velocity. Compared to previous methods in this field, the proposed model produces responses largely independent of the spatial frequency and contrast in grating experiments. An angular velocity based control scheme is proposed to implement the model in a bee simulated with the Unity game engine. The resulting terrain following above patterned ground and successful flight over irregularly textured terrain show the model's potential for terrain following by micro unmanned aerial vehicles.},
    url = {http://eprints.lincoln.ac.uk/35595/}
    }
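
    The control idea, regulating the ventral image angular velocity to a constant set-point by adjusting height, can be sketched as a single proportional-control step. This is an illustrative fragment only, assuming an idealised angular velocity sensor (forward speed divided by height); the set-point, gain and time step are made-up values, not those used in the paper.

        import numpy as np

        def terrain_following_step(height, forward_speed, omega_ref=300.0,
                                   gain=0.002, dt=0.02):
            """One control step: climb when the ventral angular velocity (deg/s)
            exceeds the set-point, descend when it falls below, so the regulated
            angular velocity keeps height roughly proportional to forward speed."""
            omega_est = np.degrees(forward_speed / max(height, 1e-3))  # ideal-sensor stand-in
            return height + gain * (omega_est - omega_ref) * dt

    Iterating this step over an undulating ground profile gives the qualitative behaviour described above: rising ground increases the perceived angular velocity and pushes the vehicle up, falling ground lets it descend.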
  • C. Zhao, L. Sun, P. Purkait, T. Duckett, and R. Stolkin, “Learning monocular visual odometry with dense 3d mapping from dense 3d flow,” in 2018 ieee/rsj international conference on intelligent robots and systems (iros), 2019.
    [BibTeX] [Abstract] [Download PDF]

    This paper introduces a fully deep learning approach to monocular SLAM, which can perform simultaneous localization using a neural network for learning visual odometry (L-VO) and dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow. Given this 3D flow, the dual-stream L-VO network can then predict the 6DOF relative pose and furthermore reconstruct the vehicle trajectory. In order to learn the correlation between motion directions, the Bivariate Gaussian modeling is employed in the loss function. The L-VO network achieves an overall performance of 2.68% for average translational error and 0.0143°/m for average rotational error on the KITTI odometry benchmark. Moreover, the learned depth is leveraged to generate a dense 3D map. As a result, an entire visual SLAM system, that is, learning monocular odometry combined with dense 3D mapping, is achieved.

    @inproceedings{lirolem36001,
    title = {Learning Monocular Visual Odometry with Dense 3D Mapping from Dense 3D Flow},
    publisher = {IEEE},
    booktitle = {2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    year = {2019},
    author = {Cheng Zhao and Li Sun and Pulak Purkait and Tom Duckett and Rustam Stolkin},
    month = {January},
    abstract = {This paper introduces a fully deep learning approach to monocular SLAM, which can perform simultaneous localization using a neural network for learning visual odometry (L-VO) and dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow. Given this 3D flow, the dual-stream L-VO network can then predict the 6DOF relative pose and furthermore reconstruct the vehicle trajectory. In order to learn the correlation between motion directions, the Bivariate Gaussian modeling is employed in the loss function. The L-VO network achieves an overall performance of 2.68 \% for average translational error and 0.0143°/m for average rotational error on the KITTI odometry benchmark. Moreover, the learned depth is leveraged to generate a dense 3D map. As a result, an entire visual SLAM system, that is, learning monocular odometry combined with dense 3D mapping, is achieved.},
    url = {http://eprints.lincoln.ac.uk/36001/}
    }
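
    The bivariate Gaussian modelling mentioned for the loss function can be written out as a negative log-likelihood over two correlated motion components. The sketch below assumes PyTorch and an illustrative output layout [mu_x, mu_z, log_sigma_x, log_sigma_z, rho_raw]; it is not the paper's exact parameterisation, but it shows the term that lets a network learn the correlation between motion directions.

        import torch

        def bivariate_gaussian_nll(pred, target):
            """Negative log-likelihood of a 2-D Gaussian whose parameters are
            predicted per sample: pred = [mu_x, mu_z, log_sigma_x, log_sigma_z, rho_raw]."""
            mu = pred[:, 0:2]
            sigma = torch.exp(pred[:, 2:4])            # positive standard deviations
            rho = torch.tanh(pred[:, 4])               # correlation in (-1, 1)
            dx = (target[:, 0] - mu[:, 0]) / sigma[:, 0]
            dz = (target[:, 1] - mu[:, 1]) / sigma[:, 1]
            one_minus_rho2 = 1.0 - rho ** 2
            quad = (dx ** 2 + dz ** 2 - 2.0 * rho * dx * dz) / (2.0 * one_minus_rho2)
            log_norm = torch.log(sigma[:, 0]) + torch.log(sigma[:, 1]) \
                       + 0.5 * torch.log(one_minus_rho2)
            return (quad + log_norm).mean()            # constant log(2*pi) term dropped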
  • J. Zhao, X. Ma, Q. Fu, C. Hu, and S. Yue, “An lgmd based competitive collision avoidance strategy for uav,” in The 15th international conference on artificial intelligence applications and innovations, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Building a reliable and efficient collision avoidance system for unmanned aerial vehicles (UAVs) is still a challenging problem. This research takes inspiration from locusts, which can fly in dense swarms for hundreds of miles without collision. In the locust’s brain, a visual pathway of LGMD-DCMD (lobula giant movement detector and descending contra-lateral motion detector) has been identified as the collision perception system guiding fast collision avoidance for locusts, which makes it ideal for designing artificial vision systems. However, there are very few works investigating its potential in real-world UAV applications. In this paper, we present an LGMD based competitive collision avoidance method for UAV indoor navigation. Compared to previous works, we divide the UAV’s field of view into four subfields, each handled by an LGMD neuron. Four individual competitive LGMDs (C-LGMD) therefore compete to guide the directional collision avoidance of the UAV. With more degrees of freedom than ground robots and vehicles, the UAV can escape from collision along four cardinal directions (e.g. an object approaching from the left side triggers a rightward shift of the UAV). Our proposed method has been validated by both simulations and real-time quadcopter arena experiments.

    @inproceedings{lirolem35691,
    month = {May},
    title = {An LGMD Based Competitive Collision Avoidance Strategy for UAV},
    author = {Jiannan Zhao and Xingzao Ma and Qinbing Fu and Cheng Hu and Shigang Yue},
    booktitle = {The 15th International Conference on Artificial Intelligence Applications and Innovations},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/35691/},
    abstract = {Building a reliable and efficient collision avoidance system for unmanned aerial vehicles (UAVs) is still a challenging problem. This research takes inspiration from locusts, which can fly in dense swarms for hundreds of miles without collision. In the locust's brain, a visual pathway of LGMD-DCMD (lobula giant movement detector and descending contra-lateral motion detector) has been identified as the collision perception system guiding fast collision avoidance for locusts, which makes it ideal for designing artificial vision systems. However, there are very few works investigating its potential in real-world UAV applications. In this paper, we present an LGMD based competitive collision avoidance method for UAV indoor navigation. Compared to previous works, we divide the UAV's field of view into four subfields, each handled by an LGMD neuron. Four individual competitive LGMDs (C-LGMD) therefore compete to guide the directional collision avoidance of the UAV. With more degrees of freedom than ground robots and vehicles, the UAV can escape from collision along four cardinal directions (e.g. an object approaching from the left side triggers a rightward shift of the UAV). Our proposed method has been validated by both simulations and real-time quadcopter arena experiments.},
    }
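
    The competitive scheme described above, four subfields each driving one LGMD and the UAV escaping opposite to the most excited one, can be sketched as follows. This is a toy Python/NumPy illustration: a plain frame difference stands in for the full LGMD excitation/inhibition network, and the threshold is an arbitrary placeholder.

        import numpy as np

        def escape_direction(frame_prev, frame_curr, threshold=0.05):
            """Split the field of view into four subfields, score each with a
            placeholder motion measure, and return the escape direction
            (opposite to the most excited subfield), or None if below threshold."""
            diff = np.abs(frame_curr.astype(float) - frame_prev.astype(float)) / 255.0
            h, w = diff.shape
            excitation = {
                "left":  diff[:, : w // 2].mean(),
                "right": diff[:, w // 2 :].mean(),
                "up":    diff[: h // 2, :].mean(),
                "down":  diff[h // 2 :, :].mean(),
            }
            winner, value = max(excitation.items(), key=lambda kv: kv[1])
            if value < threshold:
                return None
            opposite = {"left": "right", "right": "left", "up": "down", "down": "up"}
            return opposite[winner]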

2018

  • P. Bosilj, T. Duckett, and G. Cielniak, “Analysis of morphology-based features for classification of crop and weeds in precision agriculture,” Ieee robotics and automation letters, vol. 3, iss. 4, p. 2950–2956, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Determining the types of vegetation present in an image is a core step in many precision agriculture tasks. In this paper, we focus on pixel-based approaches for classification of crops versus weeds, especially for complex cases involving overlapping plants and partial occlusion. We examine the benefits of multi-scale and content-driven morphology-based descriptors called Attribute Profiles. These are compared to state-of-the-art keypoint descriptors with a fixed neighbourhood previously used in precision agriculture, namely Histograms of Oriented Gradients and Local Binary Patterns. The proposed classification technique is especially advantageous when coupled with morphology-based segmentation on a max-tree structure, as the same representation can be re-used for feature extraction. The robustness of the approach is demonstrated by an experimental evaluation on two datasets with different crop types. The proposed approach compared favourably to state-of-the-art approaches without an increase in computational complexity, while being able to provide descriptors at a higher resolution.

    @article{lirolem32371,
    volume = {3},
    publisher = {IEEE},
    journal = {IEEE Robotics and Automation Letters},
    number = {4},
    month = {October},
    year = {2018},
    author = {Petra Bosilj and Tom Duckett and Grzegorz Cielniak},
    title = {Analysis of morphology-based features for classification of crop and weeds in precision agriculture},
    pages = {2950--2956},
    abstract = {Determining the types of vegetation present in an image is a core step in many precision agriculture tasks. In this paper, we focus on pixel-based approaches for classification of crops versus weeds, especially for complex cases involving overlapping plants and partial occlusion. We examine the benefits of multi-scale and content-driven morphology-based descriptors called Attribute Profiles. These are compared to state-of-the-art keypoint descriptors with a fixed neighbourhood previously used in precision agriculture, namely Histograms of Oriented Gradients and Local Binary Patterns. The proposed classification technique is especially advantageous when coupled with morphology-based segmentation on a max-tree structure, as the same representation can be re-used for feature extraction. The robustness of the approach is demonstrated by an experimental evaluation on two datasets with different crop types. The proposed approach compared favourably to state-of-the-art approaches without an increase in computational complexity, while being able to provide descriptors at a higher resolution.},
    url = {http://eprints.lincoln.ac.uk/32371/}
    }
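
    Attribute Profiles of the kind described above can be approximated with off-the-shelf morphological operators. The sketch below uses scikit-image area openings and closings to build a small per-pixel feature stack; the area thresholds are arbitrary, and the paper's reuse of a single max-tree for both segmentation and feature extraction is not shown.

        import numpy as np
        from skimage.morphology import area_opening, area_closing

        def attribute_profile(gray, area_thresholds=(64, 256, 1024)):
            """Stack of area closings (coarse to fine), the original image, and
            area openings (fine to coarse): one multi-scale feature vector per pixel."""
            closings = [area_closing(gray, area_threshold=t)
                        for t in reversed(area_thresholds)]
            openings = [area_opening(gray, area_threshold=t)
                        for t in area_thresholds]
            return np.stack(closings + [gray] + openings, axis=-1)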
  • F. Camara, O. Giles, M. Rothmuller, P. Rasmussen, A. Vendelbo-Larsen, G. Markkula, Y-M. Lee, N. Merat, and C. Fox, “Predicting pedestrian road-crossing assertiveness for autonomous vehicle control,” in 21st ieee international conference on intelligent transportation systems, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles (AVs) must interact with other road users including pedestrians. Unlike passive environments, pedestrians are active agents having their own utilities and decisions, which must be inferred and predicted by AVs in order to control interactions with them and navigation around them. In particular, when a pedestrian wishes to cross the road in front of the vehicle at an unmarked crossing, the pedestrian and AV must compete for the space, which may be considered as a game-theoretic interaction in which one agent must yield to the other. To inform AV controllers in this setting, this study collects and analyses data from real-world human road crossings to determine what features of crossing behaviours are predictive about the level of assertiveness of pedestrians and of the eventual winner of the interactions. It presents the largest and most detailed data set of its kind known to us, and new methods to analyze and predict pedestrian-vehicle interactions based upon it. Pedestrian-vehicle interactions are decomposed into sequences of independent discrete events. We use probabilistic methods (regression and decision tree regression) and sequence analysis to analyze sets and sub-sequences of actions used by both pedestrians and human drivers while crossing at an intersection, to find common patterns of behaviour and to predict the winner of each interaction. We report on the particular features found to be predictive and which can thus be integrated into game-theoretic AV controllers to inform real-time interactions.

    @inproceedings{lirolem33089,
    publisher = {IEEE},
    title = {Predicting pedestrian road-crossing assertiveness for autonomous vehicle control},
    author = {F Camara and O Giles and M Rothmuller and PH Rasmussen and A Vendelbo-Larsen and G Markkula and Y-M Lee and N Merat and Charles Fox},
    booktitle = {21st IEEE International Conference on Intelligent Transportation Systems},
    year = {2018},
    month = {November},
    abstract = {Autonomous vehicles (AVs) must interact with other road users including pedestrians. Unlike passive environments, pedestrians are active agents having their own utilities and decisions, which must be inferred and predicted by AVs in order to control interactions with them and navigation around them. In particular, when a pedestrian wishes to cross the road in front of the vehicle at an unmarked crossing, the pedestrian and AV must compete for the space, which may be considered as a game-theoretic interaction in which one agent must yield to the other. To inform AV controllers in this setting, this study collects and analyses data from real-world human road crossings to determine what features of crossing behaviours are predictive about the level of assertiveness of pedestrians and of the eventual winner of the interactions. It presents the largest and most detailed data set of its kind known to us, and new methods to analyze and predict pedestrian-vehicle interactions based upon it. Pedestrian-vehicle interactions are decomposed into sequences of independent discrete events. We use probabilistic methods (regression and decision tree regression) and sequence analysis to analyze sets and sub-sequences of actions used by both pedestrians and human drivers while crossing at an intersection, to find common patterns of behaviour and to predict the winner of each interaction. We report on the particular features found to be predictive and which can thus be integrated into game-theoretic AV controllers to inform real-time interactions.},
    url = {http://eprints.lincoln.ac.uk/33089/}
    }
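
    One of the predictive tools mentioned, decision tree regression over features of each crossing interaction, is easy to sketch with scikit-learn. The feature columns and values below are hypothetical stand-ins (e.g. time of first head turn, distance to kerb, approach speed, number of hesitations), not the study's actual variables or data.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        # Hypothetical per-interaction features and outcomes
        # (1.0 = pedestrian crossed first, 0.0 = vehicle passed first).
        X = np.array([[0.8, 1.2, 4.5, 0],
                      [2.1, 0.4, 6.0, 2],
                      [0.5, 2.0, 3.8, 1]])
        y = np.array([1.0, 0.0, 1.0])

        model = DecisionTreeRegressor(max_depth=3).fit(X, y)
        print(model.predict([[1.0, 1.0, 5.0, 1]]))  # assertiveness-like score in [0, 1]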
  • H. Cuayahuitl, S. Ryu, D. Lee, and J. Kim, “A study on dialogue reward prediction for open-ended conversational agents,” in Neurips workshop on conversational ai, 2018.
    [BibTeX] [Abstract] [Download PDF]

    The amount of dialogue history to include in a conversational agent is often underestimated and/or set in an empirical and thus possibly naive way. This suggests that principled investigations into optimal context windows are urgently needed given that the amount of dialogue history and corresponding representations can play an important role in the overall performance of a conversational system. This paper studies the amount of history required by conversational agents for reliably predicting dialogue rewards. The task of dialogue reward prediction is chosen for investigating the effects of varying amounts of dialogue history and their impact on system performance. Experimental results using a dataset of 18K human-human dialogues report that lengthy dialogue histories of at least 10 sentences are preferred (25 sentences being the best in our experiments) over short ones, and that lengthy histories are useful for training dialogue reward predictors with strong positive correlations between target dialogue rewards and predicted ones.

    @inproceedings{lirolem34433,
    publisher = {arXiv},
    title = {A Study on Dialogue Reward Prediction for Open-Ended Conversational Agents},
    booktitle = {NeurIPS Workshop on Conversational AI},
    year = {2018},
    author = {Heriberto Cuayahuitl and Seonghan Ryu and Donghyeon Lee and Jihie Kim},
    month = {December},
    abstract = {The amount of dialogue history to include in a conversational agent is often underestimated and/or set in an empirical and thus possibly naive way. This suggests that principled investigations into optimal context windows are urgently needed given that the amount of dialogue history and corresponding representations can play an important role in the overall performance of a conversational system. This paper studies the amount of history required by conversational agents for reliably predicting dialogue rewards. The task of dialogue reward prediction is chosen for investigating the effects of varying amounts of dialogue history and their impact on system performance. Experimental results using a dataset of 18K human-human dialogues report that lengthy dialogue histories of at least 10 sentences are preferred (25 sentences being the best in our experiments) over short ones, and that lengthy histories are useful for training dialogue reward predictors with strong positive correlations between target dialogue rewards and predicted ones.},
    url = {http://eprints.lincoln.ac.uk/34433/}
    }
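
    The experimental variable in this study is simply the size of the dialogue-history window fed to the reward predictor, and the headline metric is the correlation between predicted and target rewards. A minimal sketch of both, with the predictor itself left out:

        from scipy.stats import pearsonr

        def truncate_history(sentences, n_sentences=25):
            """Keep only the most recent n sentences of the dialogue history
            (the paper found windows of 10 or more sentences, around 25, to work best)."""
            return sentences[-n_sentences:]

        def history_correlation(predicted_rewards, target_rewards):
            """Pearson correlation between predicted and target dialogue rewards."""
            r, _ = pearsonr(predicted_rewards, target_rewards)
            return r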
  • Q. Fu, C. Hu, J. Peng, and S. Yue, “Shaping the collision selectivity in a looming sensitive neuron model with parallel on and off pathways and spike frequency adaptation,” Neural networks, vol. 106, p. 127–143, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Shaping the collision selectivity in vision-based artificial collision-detecting systems is still an open challenge. This paper presents a novel neuron model of a locust looming detector, i.e. the lobula giant movement detector (LGMD1), in order to provide effective solutions to enhance the collision selectivity for looming objects over other visual challenges. We propose an approach to model the biologically plausible mechanisms of ON and OFF pathways and a biophysical mechanism of spike frequency adaptation (SFA) in the proposed LGMD1 visual neural network. The ON and OFF pathways can separate both dark and light looming features for parallel spatiotemporal computations. This works effectively for perceiving a potential collision from approaching dark or light objects; such a bio-plausible structure can also separate the LGMD1’s collision selectivity from that of its neighbouring looming detector, the LGMD2. The SFA mechanism can enhance the LGMD1’s collision selectivity for approaching objects over receding and translating stimuli, which is a significant improvement compared with similar LGMD1 neuron models. The proposed framework has been tested using off-line tests of synthetic and real-world stimuli, as well as on-line bio-robotic tests. The enhanced collision selectivity of the proposed model has been validated in systematic experiments. The computational simplicity and robustness of this work have also been verified by the bio-robotic tests, which demonstrates its potential for building neuromorphic collision-detection sensors that are both fast and reliable.

    @article{lirolem31536,
    month = {December},
    publisher = {Elsevier for European Neural Network Society (ENNS)},
    journal = {Neural Networks},
    volume = {106},
    pages = {127--143},
    title = {Shaping the collision selectivity in a looming sensitive neuron model with parallel ON and OFF pathways and spike frequency adaptation},
    year = {2018},
    author = {Qinbing Fu and Cheng Hu and Jigen Peng and Shigang Yue},
    url = {http://eprints.lincoln.ac.uk/31536/},
    abstract = {Shaping the collision selectivity in vision-based artificial collision-detecting systems is still an open challenge. This paper presents a novel neuron model of a locust looming detector, i.e. the lobula giant movement detector (LGMD1), in order to provide effective solutions to enhance the collision selectivity for looming objects over other visual challenges. We propose an approach to model the biologically plausible mechanisms of ON and OFF pathways and a biophysical mechanism of spike frequency adaptation (SFA) in the proposed LGMD1 visual neural network. The ON and OFF pathways can separate both dark and light looming features for parallel spatiotemporal computations. This works effectively for perceiving a potential collision from approaching dark or light objects; such a bio-plausible structure can also separate the LGMD1's collision selectivity from that of its neighbouring looming detector, the LGMD2. The SFA mechanism can enhance the LGMD1's collision selectivity for approaching objects over receding and translating stimuli, which is a significant improvement compared with similar LGMD1 neuron models. The proposed framework has been tested using off-line tests of synthetic and real-world stimuli, as well as on-line bio-robotic tests. The enhanced collision selectivity of the proposed model has been validated in systematic experiments. The computational simplicity and robustness of this work have also been verified by the bio-robotic tests, which demonstrates its potential for building neuromorphic collision-detection sensors that are both fast and reliable.}
    }
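
    Two mechanisms named in this abstract, the ON/OFF split of luminance change and spike frequency adaptation, can be shown in isolation. The sketch below is a simplified Python/NumPy illustration with made-up time constants; it omits the spatial excitation/inhibition layers of the full LGMD1 network.

        import numpy as np

        def on_off_split(prev_frame, curr_frame):
            """Half-wave rectify the luminance change into parallel ON (brightening)
            and OFF (darkening) channels for separate spatiotemporal processing."""
            diff = curr_frame.astype(float) - prev_frame.astype(float)
            return np.maximum(diff, 0.0), np.maximum(-diff, 0.0)

        def spike_frequency_adaptation(response, adaptation, tau=0.9, strength=0.5):
            """Leaky adaptation subtracted from the response: sustained (translating
            or receding) stimuli are attenuated, while an accelerating looming
            stimulus keeps outpacing the adaptation term."""
            adaptation = tau * adaptation + (1.0 - tau) * response
            return response - strength * adaptation, adaptation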
  • Q. Fu, N. Bellotto, C. Hu, and S. Yue, “Performance of a visual fixation model in an autonomous micro robot inspired by drosophila physiology,” in Ieee international conference on robotics and biomimetics, 2018.
    [BibTeX] [Abstract] [Download PDF]

    In nature, lightweight and low-powered insects are ideal model systems to study motion perception strategies. Understanding the underlying characteristics and functionality of insects’ visual systems is not only attractive to neural system modellers, but also critical in providing effective solutions for future robotics. This paper presents a novel model of a dynamic vision system inspired by Drosophila physiology for mimicking fast motion tracking and a closed-loop behavioural response of fixation. The proposed model was realised on an embedded system in an autonomous micro robot with limited computational resources. A monocular camera was used as the only motion sensing modality. Systematic experiments including open-loop and closed-loop bio-robotic tests validated the proposed visual fixation model: the robot showed motion tracking and fixation behaviours similar to insects, and the image processing frequency was maintained at 25 to 45 Hz. Arena tests also demonstrated a successful following behaviour arising from fixation during navigation.

    @inproceedings{lirolem33846,
    month = {December},
    author = {Qinbing Fu and Nicola Bellotto and Cheng Hu and Shigang Yue},
    year = {2018},
    booktitle = {IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS},
    title = {Performance of a Visual Fixation Model in an Autonomous Micro Robot Inspired by Drosophila Physiology},
    url = {http://eprints.lincoln.ac.uk/33846/},
    abstract = {In nature, lightweight and low-powered insects are ideal model systems to study motion perception strategies. Understanding the underlying characteristics and functionality of insects' visual systems is not only attractive to neural system modellers, but also critical in providing effective solutions for future robotics. This paper presents a novel model of a dynamic vision system inspired by Drosophila physiology for mimicking fast motion tracking and a closed-loop behavioural response of fixation. The proposed model was realised on an embedded system in an autonomous micro robot with limited computational resources. A monocular camera was used as the only motion sensing modality. Systematic experiments including open-loop and closed-loop bio-robotic tests validated the proposed visual fixation model: the robot showed motion tracking and fixation behaviours similar to insects, and the image processing frequency was maintained at 25 {$\sim$} 45 Hz. Arena tests also demonstrated a successful following behaviour arising from fixation during navigation.}
    }
  • H. van Hoof, G. Neumann, and J. Peters, “Non-parametric policy search with limited information loss,” Journal of machine learning research, vol. 18, iss. 73, p. 1–46, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Learning complex control policies from non-linear and redundant sensory input is an important challenge for reinforcement learning algorithms. Non-parametric methods that approximate value functions or transition models can address this problem, by adapting to the complexity of the dataset. Yet, many current non-parametric approaches rely on unstable greedy maximization of approximate value functions, which might lead to poor convergence or oscillations in the policy update. A more robust policy update can be obtained by limiting the information loss between successive state-action distributions. In this paper, we develop a policy search algorithm with policy updates that are both robust and non-parametric. Our method can learn non-parametric control policies for infinite horizon continuous Markov decision processes with non-linear and redundant sensory representations. We investigate how we can use approximations of the kernel function to reduce the time requirements of the demanding non-parametric computations. In our experiments, we show the strong performance of the proposed method, and how it can be approximated efficiently. Finally, we show that our algorithm can learn a real-robot underpowered swing-up task directly from image data.

    @article{lirolem28020,
    publisher = {Journal of Machine Learning Research},
    journal = {Journal of Machine Learning Research},
    volume = {18},
    number = {73},
    month = {December},
    title = {Non-parametric policy search with limited information loss},
    year = {2018},
    author = {Herke van Hoof and Gerhard Neumann and Jan Peters},
    pages = {1--46},
    url = {http://eprints.lincoln.ac.uk/28020/},
    abstract = {Learning complex control policies from non-linear and redundant sensory input is an important challenge for reinforcement learning algorithms. Non-parametric methods that approximate value functions or transition models can address this problem, by adapting to the complexity of the dataset. Yet, many current non-parametric approaches rely on unstable greedy maximization of approximate value functions, which might lead to poor convergence or oscillations in the policy update. A more robust policy update can be obtained by limiting the information loss between successive state-action distributions. In this paper, we develop a policy search algorithm with policy updates that are both robust and non-parametric. Our method can learn non-parametric control policies for infinite horizon continuous Markov decision processes with non-linear and redundant sensory representations. We investigate how we can use approximations of the kernel function to reduce the time requirements of the demanding non-parametric computations. In our experiments, we show the strong performance of the proposed method, and how it can be approximated efficiently. Finally, we show that our algorithm can learn a real-robot underpowered swing-up task directly from image data.}
    }
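
    The core of this family of methods is the KL-bounded (relative-entropy) policy update: samples are re-weighted by exponentiated advantages, with the temperature found from the dual of the constrained problem. The following is a compact sketch of that re-weighting step only, with the non-parametric (kernel-based) policy fit omitted; variable names and the optimiser choice are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        def reps_weights(advantages, epsilon=0.1):
            """Sample weights maximising expected advantage subject to a KL bound
            epsilon on the change of the state-action distribution."""
            adv = np.asarray(advantages, dtype=float)
            adv_max = adv.max()

            def dual(params):
                eta = max(params[0], 1e-6)
                return (eta * epsilon + adv_max
                        + eta * np.log(np.mean(np.exp((adv - adv_max) / eta))))

            eta = max(minimize(dual, x0=[1.0], method="Nelder-Mead").x[0], 1e-6)
            w = np.exp((adv - adv_max) / eta)
            return w / w.sum(), eta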
  • S. Indurthi, S. Yu, S. Back, and H. Cuayahuitl, “Cut to the chase: a context zoom-in network for reading comprehension,” in Empirical methods in natural language processing (emnlp), 2018.
    [BibTeX] [Abstract] [Download PDF]

    In recent years many deep neural networks have been proposed to solve Reading Comprehension (RC) tasks. Most of these models suffer from reasoning over long documents and do not trivially generalize to cases where the answer is not present as a span in a given document. We present a novel neural-based architecture that is capable of extracting relevant regions based on a given question-document pair and generating a well-formed answer. To show the effectiveness of our architecture, we conducted several experiments on the recently proposed and challenging RC dataset “NarrativeQA”. The proposed architecture outperforms state-of-the-art results (Tay et al., 2018) by 12.62% (ROUGE-L) relative improvement.

    @inproceedings{lirolem34105,
    author = {Satish Indurthi and Seunghak Yu and Seohyun Back and Heriberto Cuayahuitl},
    booktitle = {Empirical Methods in Natural Language Processing (EMNLP)},
    year = {2018},
    publisher = {Association for Computational Linguistics},
    title = {Cut to the Chase: A Context Zoom-in Network for Reading Comprehension},
    month = {October},
    url = {http://eprints.lincoln.ac.uk/34105/},
    abstract = {In recent years many deep neural networks have been proposed to solve Reading Comprehension (RC) tasks. Most of these models suffer from reasoning over long documents and do not trivially generalize to cases where the answer is not present as a span in a given document. We present a novel neural-based architecture that is capable of extracting relevant regions based on a given question-document pair and generating a well-formed answer. To show the effectiveness of our architecture, we conducted several experiments on the recently proposed and challenging RC dataset 'NarrativeQA'. The proposed architecture outperforms state-of-the-art results (Tay et al., 2018) by 12.62\% (ROUGE-L) relative improvement.},
    }
  • H. Wang, S. Yue, J. Peng, P. Baxter, C. Zhang, and Z. Wang, “A model for detection of angular velocity of image motion based on the temporal tuning of the drosophila,” in Icann 2018, 2018, p. 37–46.
    [BibTeX] [Abstract] [Download PDF]

    We propose a new bio-plausible model based on the visual systems of Drosophila for estimating the angular velocity of image motion in insects’ eyes. The model implements both preferred-direction motion enhancement and non-preferred-direction motion suppression, recently discovered in Drosophila’s visual neural circuits, to give a stronger directional selectivity. In addition, the angular velocity detecting model (AVDM) produces a response largely independent of the spatial frequency in grating experiments, which enables insects to estimate flight speed in cluttered environments. This also coincides with behavioural experiments of honeybees flying through tunnels with stripes of different spatial frequencies.

    @inproceedings{lirolem33104,
    author = {Huatian Wang and Shigang Yue and Jigen Peng and Paul Baxter and Chun Zhang and Zhihua Wang},
    booktitle = {ICANN 2018},
    year = {2018},
    title = {A Model for Detection of Angular Velocity of Image Motion Based on the Temporal Tuning of the Drosophila},
    publisher = {Springer, Cham},
    pages = {37--46},
    month = {December},
    url = {http://eprints.lincoln.ac.uk/33104/},
    abstract = {We propose a new bio-plausible model based on the visual systems of Drosophila for estimating the angular velocity of image motion in insects' eyes. The model implements both preferred-direction motion enhancement and non-preferred-direction motion suppression, recently discovered in Drosophila's visual neural circuits, to give a stronger directional selectivity. In addition, the angular velocity detecting model (AVDM) produces a response largely independent of the spatial frequency in grating experiments, which enables insects to estimate flight speed in cluttered environments. This also coincides with behavioural experiments of honeybees flying through tunnels with stripes of different spatial frequencies.}
    }
  • H. Wang, J. Peng, and S. Yue, “A directionally selective small target motion detecting visual neural network in cluttered backgrounds,” Ieee transactions on cybernetics, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Discriminating targets moving against a cluttered background is a huge challenge, let alone detecting a target as small as one or a few pixels and tracking it in flight. In the insect’s visual system, a class of specific neurons, called small target motion detectors (STMDs), have been identified as showing exquisite selectivity for small target motion. Some of the STMDs have also demonstrated direction selectivity which means these STMDs respond strongly only to their preferred motion direction. Direction selectivity is an important property of these STMD neurons which could contribute to tracking small targets such as mates in flight. However, little has been done on systematically modeling these directionally selective STMD neurons. In this paper, we propose a directionally selective STMD-based neural network for small target detection in a cluttered background. In the proposed neural network, a new correlation mechanism is introduced for direction selectivity via correlating signals relayed from two pixels. Then, a lateral inhibition mechanism is implemented on the spatial field for size selectivity of the STMD neurons. Finally, a population vector algorithm is used to encode motion direction of small targets. Extensive experiments showed that the proposed neural network not only is in accord with current biological findings, i.e., showing directional preferences, but also worked reliably in detecting small targets against cluttered backgrounds.

    @article{lirolem33420,
    author = {Hongxin Wang and Jigen Peng and Shigang Yue},
    note = {The final published version of this article can be accessed online at https://ieeexplore.ieee.org/document/8485659},
    year = {2018},
    journal = {IEEE Transactions on Cybernetics},
    title = {A Directionally Selective Small Target Motion Detecting Visual Neural Network in Cluttered Backgrounds},
    publisher = {IEEE},
    month = {October},
    url = {http://eprints.lincoln.ac.uk/33420/},
    abstract = {Discriminating targets moving against a cluttered background is a huge challenge, let alone detecting a target as small as one or a few pixels and tracking it in flight. In the insect's visual system, a class of specific neurons, called small target motion detectors (STMDs), have been identified as showing exquisite selectivity for small target motion. Some of the STMDs have also demonstrated direction selectivity which means these STMDs respond strongly only to their preferred motion direction. Direction selectivity is an important property of these STMD neurons which could contribute to tracking small targets such as mates in flight. However, little has been done on systematically modeling these directionally selective STMD neurons. In this paper, we propose a directionally selective STMD-based neural network for small target detection in a cluttered background. In the proposed neural network, a new correlation mechanism is introduced for direction selectivity via correlating signals relayed from two pixels. Then, a lateral inhibition mechanism is implemented on the spatial field for size selectivity of the STMD neurons. Finally, a population vector algorithm is used to encode motion direction of small targets. Extensive experiments showed that the proposed neural network not only is in accord with current biological findings, i.e., showing directional preferences, but also worked reliably in detecting small targets against cluttered backgrounds.}
    }
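
    The population vector algorithm mentioned for encoding the small target's motion direction amounts to a response-weighted sum of each cell's preferred-direction unit vector. A minimal sketch, independent of the rest of the STMD network:

        import numpy as np

        def population_vector_direction(responses, preferred_directions_deg):
            """Decode motion direction (degrees) from a population of direction-
            selective cells by summing unit vectors along each preferred direction,
            weighted by the corresponding response."""
            theta = np.deg2rad(np.asarray(preferred_directions_deg, dtype=float))
            r = np.asarray(responses, dtype=float)
            x, y = np.sum(r * np.cos(theta)), np.sum(r * np.sin(theta))
            return float(np.rad2deg(np.arctan2(y, x)) % 360.0)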
  • J. Zhao, C. Hu, C. Zhang, Z. Wang, and S. Yue, “A bio-inspired collision detector for small quadcopter,” in 2018 international joint conference on neural networks (ijcnn), 2018, p. 1–7.
    [BibTeX] [Abstract] [Download PDF]

    The sense and avoid capability enables insects to fly versatilely and robustly in dynamic and complex environments. Their biological principles are so practical and efficient that they have inspired humans to imitate them in our flying machines. In this paper, we study a novel bio-inspired collision detector and its application on a quadcopter. The detector is inspired by the lobula giant movement detector (LGMD) neurons in locusts, and is modelled on an STM32F407 Microcontroller Unit (MCU). Compared to other collision detecting methods applied on quadcopters, we focus on enhancing the collision detection accuracy in a bio-inspired way that can considerably increase the computing efficiency during an obstacle detection task, even in complex and dynamic environments. We designed the quadcopter’s response to imminent collisions and tested this bio-inspired system in an indoor arena. The results of the experiments demonstrated that the LGMD collision detector is feasible as a vision module for the quadcopter’s collision avoidance task.

    @inproceedings{lirolem34847,
    title = {A Bio-inspired Collision Detector for Small Quadcopter},
    publisher = {IEEE},
    author = {Jiannan Zhao and Cheng Hu and Chun Zhang and Zhihua Wang and Shigang Yue},
    booktitle = {2018 International Joint Conference on Neural Networks (IJCNN)},
    year = {2018},
    month = {October},
    pages = {1--7},
    url = {http://eprints.lincoln.ac.uk/34847/},
    abstract = {The sense and avoid capability enables insects to fly versatilely and robustly in dynamic and complex environments. Their biological principles are so practical and efficient that they have inspired humans to imitate them in our flying machines. In this paper, we study a novel bio-inspired collision detector and its application on a quadcopter. The detector is inspired by the lobula giant movement detector (LGMD) neurons in locusts, and is modelled on an STM32F407 Microcontroller Unit (MCU). Compared to other collision detecting methods applied on quadcopters, we focus on enhancing the collision detection accuracy in a bio-inspired way that can considerably increase the computing efficiency during an obstacle detection task, even in complex and dynamic environments. We designed the quadcopter's response to imminent collisions and tested this bio-inspired system in an indoor arena. The results of the experiments demonstrated that the LGMD collision detector is feasible as a vision module for the quadcopter's collision avoidance task.},
    }
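
    On the MCU side, the decision to trigger the pre-programmed avoidance manoeuvre typically reduces to checking that the LGMD output stays above a spiking threshold for several consecutive frames. The fragment below is an assumed, simplified illustration of such a rule; the threshold and frame count are placeholders, not values from the paper.

        def collision_detected(membrane_potentials, spike_threshold=0.88, n_confirm=4):
            """Declare an imminent collision only if the (normalised) LGMD membrane
            potential exceeds the threshold for n_confirm consecutive frames,
            suppressing spurious single-frame spikes."""
            recent = membrane_potentials[-n_confirm:]
            return len(recent) == n_confirm and all(p > spike_threshold for p in recent)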