Publications

2017

  • N. Bellotto, M. Fernandez-Carmona, and S. Cosar, “ENRICHME integration of ambient intelligence and robotics for AAL,” in Wellbeing AI: From Machine Learning to Subjectivity Oriented Computing (AAAI 2017 Spring Symposium), 2017.
    [BibTeX] [Abstract] [EPrints]

    Technological advances and affordability of recent smart sensors, as well as the consolidation of common software platforms for the integration of the latter and robotic sensors, are enabling the creation of complex active and assisted living environments for improving the quality of life of the elderly and the less able people. One such example is the integrated system developed by the European project ENRICHME, the aim of which is to monitor and prolong the independent living of old people affected by mild cognitive impairments with a combination of smart-home, robotics and web technologies. This paper presents in particular the design and technological solutions adopted to integrate, process and store the information provided by a set of fixed smart sensors and mobile robot sensors in a domestic scenario, including presence and contact detectors, environmental sensors, and RFID-tagged objects, for long-term user monitoring and

    @inproceedings{lirolem25362,
           booktitle = {Wellbeing AI: From Machine Learning to Subjectivity Oriented Computing (AAAI 2017 Spring Symposium)},
               month = {March},
               title = {ENRICHME integration of ambient intelligence and robotics for AAL},
              author = {Nicola Bellotto and Manuel Fernandez-Carmona and Serhan Cosar},
           publisher = {AAAI},
                year = {2017},
                 url = {http://eprints.lincoln.ac.uk/25362/},
            abstract = {Technological advances and affordability of recent smart sensors, as well as the consolidation of common software platforms for the integration of the latter and robotic sensors, are enabling the creation of complex active and assisted living environments for improving the quality of life of the elderly and the less able people. One such example is the integrated system developed by the European project ENRICHME, the aim of which is to monitor and prolong the independent living of old people affected by mild cognitive impairments with a combination of smart-home, robotics and web technologies. This paper presents in particular the design and technological solutions adopted to integrate, process and store the information provided by a set of fixed smart sensors and mobile robot sensors in a domestic scenario, including presence and contact detectors, environmental sensors, and RFID-tagged objects, for long-term user monitoring and}
    }
  • S. Cosar, C. Coppola, and N. Bellotto, “Volume-based human re-identification with RGB-D cameras,” in VISAPP – International Conference on Computer Vision Theory and Applications, 2017.
    [BibTeX] [Abstract] [EPrints]

    This paper presents an RGB-D based human re-identification approach using novel biometric features from the body’s volume. Existing work based on RGB images or skeleton features has some limitations for real-world robotic applications, most notably in dealing with occlusions and orientation of the user. Here, we propose novel features that allow performing re-identification when the person is facing side/backward or the person is partially occluded. The proposed approach has been tested for various scenarios including different views, occlusion and the public BIWI RGBD-ID dataset.

    @inproceedings{lirolem25360,
           booktitle = {VISAPP - International Conference on Computer Vision Theory and Applications},
               month = {February},
               title = {Volume-based human re-identification with RGB-D cameras},
              author = {Serhan Cosar and Claudio Coppola and Nicola Bellotto},
                year = {2017},
                 url = {http://eprints.lincoln.ac.uk/25360/},
             abstract = {This paper presents an RGB-D based human re-identification approach using novel biometric features from the body's volume. Existing work based on RGB images or skeleton features has some limitations for real-world robotic applications, most notably in dealing with occlusions and orientation of the user. Here, we propose novel features that allow performing re-identification when the person is facing side/backward or the person is partially occluded. The proposed approach has been tested for various scenarios including different views, occlusion and the public BIWI RGBD-ID dataset.}
    }
  • H. Cuayahuitl, S. Yu, A. Williamson, and J. Carse, “Scaling up deep reinforcement learning for multi-domain dialogue systems,” in International Joint Conference on Neural Networks (IJCNN), 2017.
    [BibTeX] [Abstract] [EPrints]

    Standard deep reinforcement learning methods such as Deep Q-Networks (DQN) for multiple tasks (domains) face scalability problems due to large search spaces. This paper proposes a three-stage method for multi-domain dialogue policy learning–termed NDQN, and applies it to an information-seeking spoken dialogue system in the domains of restaurants and hotels. In this method, the first stage does multi-policy learning via a network of DQN agents; the second makes use of compact state representations by compressing raw inputs; and the third stage applies a pre-training phase for bootstrapping the behaviour of agents in the network. Experimental results comparing DQN (baseline) versus NDQN (proposed) using simulations report that the proposed method exhibits better scalability and is promising for optimising the behaviour of multi-domain dialogue systems. An additional evaluation reports that the NDQN agents outperformed a K-Nearest Neighbour baseline in task success and dialogue length, yielding more efficient and successful dialogues.

    @inproceedings{lirolem26622,
           booktitle = {International Joint Conference on Neural Networks (IJCNN)},
               month = {May},
               title = {Scaling up deep reinforcement learning for multi-domain dialogue systems},
              author = {Heriberto Cuayahuitl and Seunghak Yu and Ashley Williamson and Jacob Carse},
           publisher = {IEEE},
                year = {2017},
                 url = {http://eprints.lincoln.ac.uk/26622/},
             abstract = {Standard deep reinforcement learning methods such as Deep Q-Networks (DQN) for multiple tasks (domains) face scalability problems due to large search spaces. This paper proposes a three-stage method for multi-domain dialogue policy learning{--}termed NDQN, and applies it to an information-seeking spoken dialogue system in the domains of restaurants and hotels. In this method, the first stage does multi-policy learning via a network of DQN agents; the second makes use of compact state representations by compressing raw inputs; and the third stage applies a pre-training phase for bootstrapping the behaviour of agents in the network. Experimental results comparing DQN (baseline) versus NDQN (proposed) using simulations report that the proposed method exhibits better scalability and is promising for optimising the behaviour of multi-domain dialogue systems. An additional evaluation reports that the NDQN agents outperformed a K-Nearest Neighbour baseline in task success and dialogue length, yielding more efficient and successful dialogues.}
    }
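
    The NDQN idea above (a network of per-domain DQN agents plus compact state representations and pre-training) can be illustrated with a toy sketch. The snippet below is not the authors' implementation: the domain router, action names and the one-step value update are simplified stand-ins for the learned multi-domain policy and the DQN gradient step.

      # Illustrative sketch (not the authors' NDQN code): a network of per-domain
      # dialogue agents, each with its own value estimates, plus a simple router
      # that hands each user turn to the matching domain agent.
      import random

      class DomainAgent:
          def __init__(self, actions, epsilon=0.1):
              self.actions = actions
              self.epsilon = epsilon
              self.q = {}  # (state, action) -> estimated return

          def act(self, state):
              # Epsilon-greedy action selection over this domain's action set.
              if random.random() < self.epsilon:
                  return random.choice(self.actions)
              return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

          def update(self, state, action, reward, alpha=0.5):
              # One-step value update (a stand-in for the DQN gradient step).
              key = (state, action)
              self.q[key] = self.q.get(key, 0.0) + alpha * (reward - self.q.get(key, 0.0))

      AGENTS = {
          "restaurants": DomainAgent(["request(food)", "offer(venue)", "bye"]),
          "hotels": DomainAgent(["request(area)", "offer(hotel)", "bye"]),
      }

      def route(user_turn):
          # Toy keyword router; the paper instead learns when to transfer control.
          return "hotels" if "hotel" in user_turn.lower() else "restaurants"

      turn = "I need a cheap hotel near the station"
      agent = AGENTS[route(turn)]
      print(agent.act(state="greeted"))
      agent.update(state="greeted", action="offer(hotel)", reward=1.0)
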
  • Q. Fu and S. Yue, “Modeling direction selective visual neural network with ON and OFF pathways for extracting motion cues from cluttered background,” in The 2017 International Joint Conference on Neural Networks (IJCNN 2017), 2017.
    [BibTeX] [Abstract] [EPrints]

    Nature endows animals with robust vision systems for extracting and recognizing different motion cues, detecting predators, chasing preys/mates in dynamic and cluttered environments. Direction selective neurons (DSNs), with preference to certain orientation visual stimulus, have been found in both vertebrates and invertebrates for decades. In this paper, with respect to recent biological research progress in motion-detecting circuitry, we propose a novel way to model DSNs for recognizing movements on four cardinal directions. It is based on an architecture of ON and OFF visual pathways underlying a theory of splitting motion signals into parallel channels, encoding brightness increments and decrements separately. To enhance the edge selectivity and speed response to moving objects, we put forth a bio-plausible spatial-temporal network structure with multiple connections of same polarity ON/OFF cells. Each pair-wised combination is filtered with dynamic delay depending on sampling distance. The proposed vision system was challenged against image streams from both synthetic and cluttered real physical scenarios. The results demonstrated three major contributions: first, the neural network fulfilled the characteristics of a postulated physiological map of conveying visual information through different neuropile layers; second, the DSNs model can extract useful directional motion cues from cluttered background robustly and timely, which hints at the potential of quick implementation in vision-based micro mobile robots; moreover, it also represents better speed response compared to a state-of-the-art elementary motion detector.

    @inproceedings{lirolem26619,
           booktitle = {The 2017 International Joint Conference on Neural Networks (IJCNN 2017)},
               month = {May},
               title = {Modeling direction selective visual neural network with ON and OFF pathways for extracting motion cues from cluttered background},
              author = {Qinbing Fu and Shigang Yue},
                year = {2017},
                 url = {http://eprints.lincoln.ac.uk/26619/},
             abstract = {Nature endows animals with robust vision systems for extracting and recognizing different motion cues, detecting predators, chasing preys/mates in dynamic and cluttered environments. Direction selective neurons (DSNs), with preference to certain orientation visual stimulus, have been found in both vertebrates and invertebrates for decades. In this paper, with respect to recent biological research progress in motion-detecting circuitry, we propose a novel way to model DSNs for recognizing movements on four cardinal directions. It is based on an architecture of ON and OFF visual pathways underlying a theory of splitting motion signals into parallel channels, encoding brightness increments and decrements separately. To enhance the edge selectivity and speed response to moving objects, we put forth a bio-plausible spatial-temporal network structure with multiple connections of same polarity ON/OFF cells. Each pair-wised combination is filtered with dynamic delay depending on sampling distance. The proposed vision system was challenged against image streams from both synthetic and cluttered real physical scenarios. The results demonstrated three major contributions: first, the neural network fulfilled the characteristics of a postulated physiological map of conveying visual information through different neuropile layers; second, the DSNs model can extract useful directional motion cues from cluttered background robustly and timely, which hints at the potential of quick implementation in vision-based micro mobile robots; moreover, it also represents better speed response compared to a state-of-the-art elementary motion detector.}
    }
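
    The ON/OFF pathway splitting described above can be sketched in a few lines: the temporal luminance change is half-wave rectified into parallel ON (brightness increment) and OFF (brightness decrement) channels, and a delay-and-correlate step gives a crude directional cue. This is only an illustration of the splitting principle, not the paper's full spatial-temporal network.

      import numpy as np

      def on_off_split(prev_frame, curr_frame):
          # Split the temporal luminance change into parallel ON (increments)
          # and OFF (decrements) channels.
          diff = curr_frame.astype(float) - prev_frame.astype(float)
          return np.maximum(diff, 0.0), np.maximum(-diff, 0.0)

      def rightward_cue(on_now, on_delayed):
          # Delay-and-correlate: pair each ON cell with the delayed ON cell one
          # pixel to its left; a large value suggests rightward motion.
          return float(np.sum(on_now[:, 1:] * on_delayed[:, :-1]))

      # A bright bar stepping rightwards across three frames.
      f0 = np.zeros((4, 6)); f0[:, 1] = 1.0
      f1 = np.zeros((4, 6)); f1[:, 2] = 1.0
      f2 = np.zeros((4, 6)); f2[:, 3] = 1.0

      on_01, _ = on_off_split(f0, f1)   # ON response at t1 (delayed signal)
      on_12, _ = on_off_split(f1, f2)   # ON response at t2 (current signal)
      print("rightward cue:", rightward_cue(on_12, on_01))   # > 0
      print("leftward  cue:", rightward_cue(on_01, on_12))   # 0 here
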
  • M. Hanheide, M. Göbelbecker, G. S. Horn, A. Pronobis, K. Sjöö, A. Aydemir, P. Jensfelt, C. Gretton, R. Dearden, M. Janicek, H. Zender, G. Kruijff, N. Hawes, and J. L. Wyatt, “Robot task planning and explanation in open and uncertain worlds,” Artificial Intelligence, 2017.
    [BibTeX] [Abstract] [EPrints]

    A long-standing goal of AI is to enable robots to plan in the face of uncertain and incomplete information, and to handle task failure intelligently. This paper shows how to achieve this. There are two central ideas. The first idea is to organize the robot’s knowledge into three layers: instance knowledge at the bottom, commonsense knowledge above that, and diagnostic knowledge on top. Knowledge in a layer above can be used to modify knowledge in the layer(s) below. The second idea is that the robot should represent not just how its actions change the world, but also what it knows or believes. There are two types of knowledge effects the robot’s actions can have: epistemic effects (I believe X because I saw it) and assumptions (I’ll assume X to be true). By combining the knowledge layers with the models of knowledge effects, we can simultaneously solve several problems in robotics: (i) task planning and execution under uncertainty; (ii) task planning and execution in open worlds; (iii) explaining task failure; (iv) verifying those explanations. The paper describes how the ideas are implemented in a three-layer architecture on a mobile robot platform. The robot implementation was evaluated in five different experiments on object search, mapping, and room categorization.

    @article{lirolem18592,
               month = {December},
               title = {Robot task planning and explanation in open and uncertain worlds},
              author = {Marc Hanheide and Moritz G{\"o}belbecker and Graham S. Horn and Andrzej Pronobis and Kristoffer Sj{\"o}{\"o} and Alper Aydemir and Patric Jensfelt and Charles Gretton and Richard Dearden and Miroslav Janicek and Hendrik Zender and Geert-Jan Kruijff and Nick Hawes and Jeremy L. Wyatt},
           publisher = {Elsevier},
                year = {2017},
             journal = {Artificial Intelligence},
                 url = {http://eprints.lincoln.ac.uk/18592/},
            abstract = {A long-standing goal of AI is to enable robots to plan in the face of uncertain and incomplete information, and to handle task failure intelligently. This paper shows how to achieve this. There are two central ideas. The first idea is to organize the robot's knowledge into three layers: instance knowledge at the bottom, commonsense knowledge above that, and diagnostic knowledge on top. Knowledge in a layer above can be used to modify knowledge in the layer(s) below. The second idea is that the robot should represent not just how its actions change the world, but also what it knows or believes. There are two types of knowledge effects the robot's actions can have: epistemic effects (I believe X because I saw it) and assumptions (I'll assume X to be true). By combining the knowledge layers with the models of knowledge effects, we can simultaneously solve several problems in robotics: (i) task planning and execution under uncertainty; (ii) task planning and execution in open worlds; (iii) explaining task failure; (iv) verifying those explanations. The paper describes how the ideas are implemented in a three-layer architecture on a mobile robot platform. The robot implementation was evaluated in five different experiments on object search, mapping, and room categorization.}
    }
  • M. Hanheide, D. Hebesberger, and T. Krajnik, “The when, where, and how: an adaptive robotic info-terminal for care home residents – a long-term study,” in Int. Conf. on Human-Robot Interaction (HRI), Vienna, 2017.
    [BibTeX] [Abstract] [EPrints]

    Adapting to users’ intentions is a key requirement for autonomous robots in general, and in care settings in particular. In this paper, a comprehensive long-term study of a mobile robot providing information services to residents, visitors, and staff of a care home is presented with a focus on adapting to the when and where the robot should be offering its services to best accommodate the users’ needs. Rather than providing a fixed schedule, the presented system takes the opportunity of long-term deployment to explore the space of possibilities of interaction while concurrently exploiting the model learned to provide better services. But in order to provide effective services to users in a care home, not only the when and where are relevant, but also how the information is provided and accessed. Hence, also the usability of the deployed system is studied specifically, in order to provide a most comprehensive overall assessment of a robotic info-terminal implementation in a care setting. Our results back our hypotheses, (i) that learning a spatiotemporal model of users’ intentions improves efficiency and usefulness of the system, and (ii) that the specific information sought after is indeed dependent on the location the info-terminal is offered.

    @inproceedings{lirolem25866,
           booktitle = {Int. Conf. on Human-Robot Interaction (HRI)},
               month = {March},
                title = {The when, where, and how: an adaptive robotic info-terminal for care home residents – a long-term study},
              author = {Marc Hanheide and Denise Hebesberger and Tomas Krajnik},
             address = {Vienna},
           publisher = {ACM/IEEE},
                year = {2017},
                 url = {http://eprints.lincoln.ac.uk/25866/},
             abstract = {Adapting to users' intentions is a key requirement for autonomous robots in general, and in care settings in particular. In this paper, a comprehensive long-term study of a mobile robot providing information services to residents, visitors, and staff of a care home is presented with a focus on adapting to the when and where the robot should be offering its services to best accommodate the users' needs. Rather than providing a fixed schedule, the presented system takes the opportunity of long-term deployment to explore the space of possibilities of interaction while concurrently exploiting the model learned to provide better services. But in order to provide effective services to users in a care home, not only the when and where are relevant, but also how the information is provided and accessed. Hence, also the usability of the deployed system is studied specifically, in order to provide a most comprehensive overall assessment of a robotic info-terminal implementation in a care setting. Our results back our hypotheses, (i) that learning a spatiotemporal model of users' intentions improves efficiency and usefulness of the system, and (ii) that the specific information sought after is indeed dependent on the location the info-terminal is offered.}
    }
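
    A minimal sketch of the explore/exploit idea behind the adaptive info-terminal: the robot keeps per-(location, hour) interaction statistics and picks the next offering slot with an upper-confidence-style bonus, so rarely tried slots are still explored while slots learned to be busy are exploited. The locations, hours and bandit-style rule here are assumptions for illustration; the deployed system learns a spatiotemporal model of users' intentions.

      import math, random

      # Illustrative sketch (not the deployed system): choose where and when to
      # offer the info-terminal by trading off the learned interaction rate
      # (exploitation) against how rarely a slot has been tried (exploration).
      locations = ["lounge", "corridor", "dining room"]
      hours = range(8, 20)
      stats = {(loc, h): {"offers": 1, "interactions": 0} for loc in locations for h in hours}

      def choose_slot(total_offers):
          def score(slot):
              s = stats[slot]
              rate = s["interactions"] / s["offers"]
              bonus = math.sqrt(2 * math.log(total_offers + 1) / s["offers"])  # UCB-style bonus
              return rate + bonus
          return max(stats, key=score)

      # Simulated feedback: residents interact most in the lounge around midday.
      for t in range(200):
          loc, h = choose_slot(t)
          interacted = random.random() < (0.6 if (loc, h) == ("lounge", 12) else 0.1)
          stats[(loc, h)]["offers"] += 1
          stats[(loc, h)]["interactions"] += int(interacted)

      print(max(stats, key=lambda s: stats[s]["interactions"]))
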
  • D. Hebesberger, C. Dondrup, C. Gisinger, and M. Hanheide, “Patterns of use: how older adults with progressed dementia interact with a robot,” in Proc ACM/IEEE Int. Conf. on Human-Robot Interaction (HRI) Late Breaking Reports, Vienna, 2017.
    [BibTeX] [Abstract] [EPrints]

    Older adults represent a new user group of robots that are deployed in their private homes or in care facilities. The presented study focused on tangible aspects of older adults’ interaction with an autonomous robot. The robot was deployed as a companion in physical therapy for older adults with progressed dementia. Interaction was possible via a mounted touch screen. The menu was structured in a single layer and icons were big and with strong contrast. Employing a detailed observation protocol, interaction frequencies and contexts were assessed. Thereby, it was found that most of the interaction was encouraged by the therapists and that two out of 12 older adults with progressed dementia showed self-induced interactions.

    @inproceedings{lirolem25867,
           booktitle = {Proc ACM/IEEE Int. Conf. on Human-Robot Interaction (HRI) Late Breaking Reports},
               month = {March},
               title = {Patterns of use: how older adults with progressed dementia interact with a robot},
              author = {Denise Hebesberger and Christian Dondrup and Christoph Gisinger and Marc Hanheide},
             address = {Vienna},
           publisher = {ACM/IEEE},
                year = {2017},
                 url = {http://eprints.lincoln.ac.uk/25867/},
             abstract = {Older adults represent a new user group of robots that are deployed in their private homes or in care facilities. The presented study focused on tangible aspects of older adults' interaction with an autonomous robot. The robot was deployed as a companion in physical therapy for older adults with progressed dementia. Interaction was possible via a mounted touch screen. The menu was structured in a single layer and icons were big and with strong contrast. Employing a detailed observation protocol, interaction frequencies and contexts were assessed. Thereby, it was found that most of the interaction was encouraged by the therapists and that two out of 12 older adults with progressed dementia showed self-induced interactions.}
    }
  • B. Hu, S. Yue, and Z. Zhang, “A rotational motion perception neural network based on asymmetric spatiotemporal visual information processing,” IEEE Transactions on Neural Networks and Learning Systems, 2017.
    [BibTeX] [Abstract] [EPrints]

    All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences to each of the three motion elements. There are computational models on translation and expansion/contraction perceptions; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we proposed a neural network that utilizes a specific spatiotemporal arrangement of asymmetric lateral inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts: presynaptic and postsynaptic parts. In the presynaptic part, there are a number of lateral inhibited DSNNs to extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts depending on which rotation, clockwise (cw) or counter-cw (ccw), to perceive. Systematic experiments under various conditions and settings have been carried out and validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing.

    @article{lirolem24936,
               month = {December},
               title = {A rotational motion perception neural network based on asymmetric spatiotemporal visual information processing},
              author = {Bin Hu and Shigang Yue and Zhuhong Zhang},
           publisher = {IEEE},
                year = {2017},
             journal = {IEEE Transactions on Neural Networks and Learning Systems},
                 url = {http://eprints.lincoln.ac.uk/24936/},
             abstract = {All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences to each of the three motion elements. There are computational models on translation and expansion/contraction perceptions; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we proposed a neural network that utilizes a specific spatiotemporal arrangement of asymmetric lateral inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts: presynaptic and postsynaptic parts. In the presynaptic part, there are a number of lateral inhibited DSNNs to extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts depending on which rotation, clockwise (cw) or counter-cw (ccw), to perceive. Systematic experiments under various conditions and settings have been carried out and validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing.}
    }
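
    The core mechanism described above, multiplying each direction-selective cell's excitation by the delayed excitation of its cyclic neighbour, can be sketched as follows. The eight-cell ring, one-frame delay and toy stimulus are illustrative assumptions, not the paper's network.

      import numpy as np

      # Illustrative sketch: eight direction-selective cells arranged in a cycle.
      # Clockwise rotation activates them in ascending order; the cue multiplies
      # each cell's excitation by the delayed excitation of its cyclic neighbour.
      def rotation_cues(activity):
          # activity: T x 8 array of per-cell excitation over time
          delayed = activity[:-1]          # one-step delayed excitation
          current = activity[1:]
          cw  = float(np.sum(current * np.roll(delayed, 1, axis=1)))
          ccw = float(np.sum(current * np.roll(delayed, -1, axis=1)))
          return cw, ccw

      T, N = 16, 8
      activity = np.zeros((T, N))
      for t in range(T):
          activity[t, t % N] = 1.0         # a stimulus sweeping clockwise

      cw, ccw = rotation_cues(activity)
      print("clockwise cue:", cw, "counter-clockwise cue:", ccw)
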
  • S. Keizer, M. Guhe, H. Cuayahuitl, I. Efstathiou, K. Engelbrecht, M. Dobre, A. Lascarides, and O. Lemon, “Evaluating persuasion strategies and deep reinforcement learning methods for negotiation dialogue agents,” in 15th Conference of the European chapter of the Association for Computational Linguistics, 2017.
    [BibTeX] [Abstract] [EPrints]

    In this paper we present a comparative evaluation of various negotiation strategies within an online version of the game ‘Settlers of Catan’. The comparison is based on human subjects playing games against artificial game-playing agents (‘bots’) which implement different negotiation dialogue strategies, using a chat dialogue interface to negotiate trades. Our results suggest that a negotiation strategy that uses persuasion, as well as a strategy that is trained from data using Deep Reinforcement Learning, both lead to an improved win rate against humans, compared to previous rule-based and supervised learning baseline dialogue negotiators.

    @inproceedings{lirolem26621,
           booktitle = {15th Conference of the European chapter of the Association for Computational Linguistics},
               month = {April},
               title = {Evaluating persuasion strategies and deep reinforcement learning methods for negotiation dialogue agents},
              author = {Simon Keizer and Markus Guhe and Heriberto Cuayahuitl and Ioannis Efstathiou and Klaus-Peter Engelbrecht and Mihai Dobre and Alex Lascarides and Oliver Lemon},
           publisher = {ACL},
                year = {2017},
                 url = {http://eprints.lincoln.ac.uk/26621/},
             abstract = {In this paper we present a comparative evaluation of various negotiation strategies within an online version of the game 'Settlers of Catan'. The comparison is based on human subjects playing games against artificial game-playing agents ('bots') which implement different negotiation dialogue strategies, using a chat dialogue interface to negotiate trades. Our results suggest that a negotiation strategy that uses persuasion, as well as a strategy that is trained from data using Deep Reinforcement Learning, both lead to an improved win rate against humans, compared to previous rule-based and supervised learning baseline dialogue negotiators.}
    }
  • J. Kennedy, P. Baxter, and T. Belpaeme, “Nonverbal immediacy as a characterisation of social behaviour for human-robot interaction,” International Journal of Social Robotics, 2017.
    [BibTeX] [Abstract] [EPrints]

    An increasing amount of research has started to explore the impact of robot social behaviour on the outcome of a goal for a human interaction partner, such as cognitive learning gains. However, it remains unclear from what principles the social behaviour for such robots should be derived. Human models are often used, but in this paper an alternative approach is proposed. First, the concept of nonverbal immediacy from the communication literature is introduced, with a focus on how it can provide a characterisation of social behaviour, and the subsequent outcomes of such behaviour. A literature review is conducted to explore the impact on learning of the social cues which form the nonverbal immediacy measure. This leads to the production of a series of guidelines for social robot behaviour. The resulting behaviour is evaluated in a more general context, where both children and adults judge the immediacy of humans and robots in a similar manner, and their recall of a short story is tested. Children recall more of the story when the robot is more immediate, which demonstrates an effect predicted by the literature. This study provides validation for the application of nonverbal immediacy to child-robot interaction. It is proposed that nonverbal immediacy measures could be used as a means of characterising robot social behaviour for human-robot interaction.

    @article{lirolem24215,
               month = {December},
               title = {Nonverbal immediacy as a characterisation of social behaviour for human-robot interaction},
              author = {James Kennedy and Paul Baxter and Tony Belpaeme},
           publisher = {Springer},
                year = {2017},
             journal = {International Journal of Social Robotics},
                 url = {http://eprints.lincoln.ac.uk/24215/},
             abstract = {An increasing amount of research has started to explore the impact of robot social behaviour on the outcome of a goal for a human interaction partner, such as cognitive learning gains. However, it remains unclear from what principles the social behaviour for such robots should be derived. Human models are often used, but in this paper an alternative approach is proposed. First, the concept of nonverbal immediacy from the communication literature is introduced, with a focus on how it can provide a characterisation of social behaviour, and the subsequent outcomes of such behaviour. A literature review is conducted to explore the impact on learning of the social cues which form the nonverbal immediacy measure. This leads to the production of a series of guidelines for social robot behaviour. The resulting behaviour is evaluated in a more general context, where both children and adults judge the immediacy of humans and robots in a similar manner, and their recall of a short story is tested. Children recall more of the story when the robot is more immediate, which demonstrates an effect predicted by the literature. This study provides validation for the application of nonverbal immediacy to child-robot interaction. It is proposed that nonverbal immediacy measures could be used as a means of characterising robot social behaviour for human-robot interaction.}
    }
  • T. Krajnik, P. Cristoforis, K. Kusumam, P. Neubert, and T. Duckett, “Image features for visual teach-and-repeat navigation in changing environments,” Robotics and Autonomous Systems, 2017.
    [BibTeX] [Abstract] [EPrints]

    We present an evaluation of standard image features in the context of long-term visual teach-and-repeat navigation of mobile robots, where the environment exhibits significant changes in appearance caused by seasonal weather variations and daily illumination changes. We argue that for long-term autonomous navigation, the viewpoint-, scale- and rotation- invariance of the standard feature extractors is less important than their robustness to the mid- and long-term environment appearance changes. Therefore, we focus our evaluation on the robustness of image registration to variable lighting and naturally-occurring seasonal changes. We combine detection and description components of different image extractors and evaluate their performance on five datasets collected by mobile vehicles in three different outdoor environments over the course of one year. Moreover, we propose a trainable feature descriptor based on a combination of evolutionary algorithms and Binary Robust Independent Elementary Features, which we call GRIEF (Generated BRIEF). In terms of robustness to seasonal changes, the most promising results were achieved by the SpG/CNN and the STAR/GRIEF feature, which was slightly less robust, but faster to calculate.

    @article{lirolem25239,
               month = {December},
               title = {Image features for visual teach-and-repeat navigation in changing environments},
              author = {Tomas Krajnik and Pablo Cristoforis and Keerthy Kusumam and Peer Neubert and Tom Duckett},
           publisher = {Elsevier},
                year = {2017},
             journal = {Robotics and Autonomous Systems},
                 url = {http://eprints.lincoln.ac.uk/25239/},
            abstract = {We present an evaluation of standard image features in the context of long-term visual teach-and-repeat navigation of mobile robots, where the environment exhibits significant changes in appearance caused by seasonal weather variations and daily illumination changes. We argue that for long-term autonomous navigation, the viewpoint-, scale- and rotation- invariance of the standard feature extractors is less important than their robustness to the mid- and long-term environment appearance changes. Therefore, we focus our evaluation on the robustness of image registration to variable lighting and naturally-occurring seasonal changes. We combine detection and description components of different image extractors and evaluate their performance on five datasets collected by mobile vehicles in three different outdoor environments over the course of one year. Moreover, we propose a trainable feature descriptor based on a combination of evolutionary algorithms and Binary Robust Independent Elementary Features, which we call GRIEF (Generated BRIEF). In terms of robustness to seasonal changes, the most promising results were achieved by the SpG/CNN and the STAR/GRIEF feature, which was slightly less robust, but faster to calculate.}
    }
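
    GRIEF builds on BRIEF-style binary descriptors, where each bit is a brightness comparison between a pixel pair, and evolves the set of pairs so the bits stay stable across seasonal change. The sketch below shows only the underlying comparison-pair descriptor and Hamming matching; the evolutionary pair selection and real image patches are not reproduced, and all sizes are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      PATCH = 16
      # Random pixel-pair comparisons, as in BRIEF; GRIEF's contribution is to
      # evolve this set of pairs so the resulting bits stay stable across
      # seasonal appearance change (not reproduced here).
      PAIRS = rng.integers(0, PATCH, size=(256, 4))   # (y1, x1, y2, x2) per bit

      def describe(patch):
          # One bit per comparison: is pixel A brighter than pixel B?
          return np.array([patch[y1, x1] > patch[y2, x2] for y1, x1, y2, x2 in PAIRS])

      def hamming(d1, d2):
          return int(np.count_nonzero(d1 != d2))

      summer = rng.random((PATCH, PATCH))
      winter = summer + 0.02 * rng.standard_normal((PATCH, PATCH))  # mild appearance change
      other  = rng.random((PATCH, PATCH))
      print(hamming(describe(summer), describe(winter)))   # small distance
      print(hamming(describe(summer), describe(other)))    # around 128 on average
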
  • D. Liciotti, T. Duckett, N. Bellotto, E. Frontoni, and P. Zingaretti, “HMM-based activity recognition with a ceiling RGB-D camera,” in ICPRAM – 6th International Conference on Pattern Recognition Applications and Methods, 2017.
    [BibTeX] [Abstract] [EPrints]

    Automated recognition of Activities of Daily Living allows to identify possible health problems and apply corrective strategies in Ambient Assisted Living (AAL). Activities of Daily Living analysis can provide very useful information for elder care and long-term care services. This paper presents an automated RGB-D video analysis system that recognises human ADLs activities, related to classical daily actions. The main goal is to predict the probability of an analysed subject action. Thus, the abnormal behaviour can be detected. The activity detection and recognition is performed using an affordable RGB-D camera. Human activities, despite their unstructured nature, tend to have a natural hierarchical structure; for instance, generally making a coffee involves a three-step process of turning on the coffee machine, putting sugar in cup and opening the fridge for milk. Action sequence recognition is then handled using a discriminative Hidden Markov Model (HMM). RADiaL, a dataset with RGB-D images and 3D position of each person for training as well as evaluating the HMM, has been built and made publicly available.

    @inproceedings{lirolem25361,
           booktitle = {ICPRAM - 6th International Conference on Pattern Recognition Applications and Methods},
               month = {February},
               title = {HMM-based activity recognition with a ceiling RGB-D camera},
              author = {Daniele Liciotti and Tom Duckett and Nicola Bellotto and Emanuele Frontoni and Primo Zingaretti},
                year = {2017},
                 url = {http://eprints.lincoln.ac.uk/25361/},
            abstract = {Automated recognition of Activities of Daily Living allows to identify possible health problems and apply corrective strategies in Ambient Assisted Living (AAL). Activities of Daily Living analysis can provide very useful information for elder care and long-term care services. This paper presents an automated RGB-D video analysis system that recognises human ADLs activities, related to classical daily actions. The main goal is to predict the probability of an analysed subject action. Thus, the abnormal behaviour can be detected. The activity detection and recognition is performed using an affordable RGB-D camera. Human activities, despite their unstructured nature, tend to have a natural hierarchical structure; for instance, generally making a coffee involves a three-step process of turning on the coffee machine, putting sugar in cup and opening the fridge for milk. Action sequence recognition is then handled using a discriminative Hidden Markov Model (HMM). RADiaL, a dataset with RGB-D images and 3D position of each person for training as well as evaluating the HMM, has been built and made publicly available.}
    }
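
    Action-sequence decoding with an HMM, as used above, can be illustrated with a small self-contained Viterbi example. The states, observations and probabilities below are invented for illustration, and a plain generative HMM is used rather than the discriminative variant trained in the paper.

      import numpy as np

      # Illustrative sketch (not the paper's trained model): decode a sequence of
      # low-level observations into hidden sub-actions of "making coffee".
      states = ["at_machine", "at_counter", "at_fridge"]
      obs_symbols = ["near_machine", "near_counter", "near_fridge"]
      start = np.array([0.6, 0.3, 0.1])
      trans = np.array([[0.6, 0.3, 0.1],
                        [0.2, 0.6, 0.2],
                        [0.1, 0.3, 0.6]])
      emit = np.array([[0.8, 0.1, 0.1],
                       [0.1, 0.8, 0.1],
                       [0.1, 0.1, 0.8]])

      def viterbi(observations):
          obs = [obs_symbols.index(o) for o in observations]
          T, N = len(obs), len(states)
          delta = np.zeros((T, N)); back = np.zeros((T, N), dtype=int)
          delta[0] = start * emit[:, obs[0]]
          for t in range(1, T):
              scores = delta[t - 1][:, None] * trans          # N x N transition scores
              back[t] = scores.argmax(axis=0)
              delta[t] = scores.max(axis=0) * emit[:, obs[t]]
          path = [int(delta[-1].argmax())]
          for t in range(T - 1, 0, -1):
              path.append(int(back[t][path[-1]]))
          return [states[i] for i in reversed(path)]

      print(viterbi(["near_machine", "near_counter", "near_fridge", "near_fridge"]))
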
  • P. Lightbody, M. Hanheide, and T. Krajnik, “A versatile high-performance visual fiducial marker detection system with scalable identity encoding,” in 32nd ACM Symposium on Applied Computing, 2017, pp. 1-7.
    [BibTeX] [Abstract] [EPrints]

    Fiducial markers have a wide field of applications in robotics, ranging from external localisation of single robots or robotic swarms, over self-localisation in marker-augmented environments, to simplifying perception by tagging objects in a robot’s surrounding. We propose a new family of circular markers allowing for a computationally efficient detection, identification and full 3D position estimation. A key concept of our system is the separation of the detection and identification steps, where the first step is based on a computationally efficient circular marker detection, and the identification step is based on an open-ended ‘Necklace code’, which allows for a theoretically infinite number of individually identifiable markers. The experimental evaluation of the system on a real robot indicates that while the proposed algorithm achieves similar accuracy to other state-of-the-art methods, it is faster by two orders of magnitude and it can detect markers from longer distances.

    @inproceedings{lirolem25828,
           booktitle = {32nd ACM Symposium on Applied Computing},
               month = {April},
               title = {A versatile high-performance visual fiducial marker detection system with scalable identity encoding},
              author = {Peter Lightbody and Marc Hanheide and Tomas Krajnik},
           publisher = {Association for Computing Machinery},
                year = {2017},
               pages = {1--7},
                 url = {http://eprints.lincoln.ac.uk/25828/},
             abstract = {Fiducial markers have a wide field of applications in robotics, ranging from external localisation of single robots or robotic swarms, over self-localisation in marker-augmented environments, to simplifying perception by tagging objects in a robot's surrounding. We propose a new family of circular markers allowing for a computationally efficient detection, identification and full 3D position estimation. A key concept of our system is the separation of the detection and identification steps, where the first step is based on a computationally efficient circular marker detection, and the identification step is based on an open-ended 'Necklace code', which allows for a theoretically infinite number of individually identifiable markers. The experimental evaluation of the system on a real robot indicates that while the proposed algorithm achieves similar accuracy to other state-of-the-art methods, it is faster by two orders of magnitude and it can detect markers from longer distances.}
    }
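
    The 'Necklace code' identity encoding separates detection from identification by reading the marker as a ring of bits whose identity must not depend on the rotation at which it is read. A minimal sketch of that rotation-invariant idea follows: take the lexicographically smallest rotation of the bit ring as the canonical ID (the actual system adds constraints and error checking not shown here).

      # Illustrative sketch of the rotation-invariant identity idea: a circular
      # marker is read as a ring of bits, and all rotations of that ring must map
      # to the same ID, so the lexicographically smallest rotation ("necklace"
      # canonical form) is used as the identity.
      def necklace_id(bits):
          n = len(bits)
          rotations = [tuple(bits[i:] + bits[:i]) for i in range(n)]
          canonical = min(rotations)
          return int("".join(map(str, canonical)), 2)

      # The same physical marker seen at two different orientations:
      reading_a = [0, 1, 1, 0, 1, 0, 0, 0]
      reading_b = reading_a[3:] + reading_a[:3]
      print(necklace_id(reading_a) == necklace_id(reading_b))   # True
      print(necklace_id(reading_a))
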
  • J. Lock, G. Cielniak, and N. Bellotto, “Portable navigation system with adaptive multimodal interface for the blind,” in AAAI 2017 Spring Symposium – Designing the User Experience of Machine Learning Systems, 2017.
    [BibTeX] [Abstract] [EPrints]

    Recent advances in mobile technology have the potential to radically change the quality of tools available for people with sensory impairments, in particular the blind. Nowadays almost every smart-phone and tablet is equipped with high-resolution cameras, which are typically used for photos and videos, communication purposes, games and virtual reality applications. Very little has been proposed to exploit these sensors for user localisation and navigation instead. To this end, the ‘Active Vision with Human-in-the-Loop for the Visually Impaired’ (ActiVis) project aims to develop a novel electronic travel aid to tackle the ‘last 10 yards problem’ and enable the autonomous navigation of blind users in unknown environments, ultimately enhancing or replacing existing solutions, such as guide dogs and white canes. This paper describes some of the key project’s challenges, in particular with respect to the design of the user interface that translates visual information from the camera to guiding instructions for the blind person, taking into account limitations due to the visual impairment and proposing a multimodal interface that embeds human-machine co-adaptation.

    @inproceedings{lirolem25413,
           booktitle = {AAAI 2017 Spring Symposium - Designing the User Experience of Machine Learning Systems},
               month = {March},
                title = {Portable navigation system with adaptive multimodal interface for the blind},
              author = {Jacobus Lock and Grzegorz Cielniak and Nicola Bellotto},
           publisher = {AAAI},
                year = {2017},
                 url = {http://eprints.lincoln.ac.uk/25413/},
             abstract = {Recent advances in mobile technology have the potential to radically change the quality of tools available for people with sensory impairments, in particular the blind. Nowadays almost every smart-phone and tablet is equipped with high-resolution cameras, which are typically used for photos and videos, communication purposes, games and virtual reality applications. Very little has been proposed to exploit these sensors for user localisation and navigation instead. To this end, the 'Active Vision with Human-in-the-Loop for the Visually Impaired' (ActiVis) project aims to develop a novel electronic travel aid to tackle the 'last 10 yards problem' and enable the autonomous navigation of blind users in unknown environments, ultimately enhancing or replacing existing solutions, such as guide dogs and white canes. This paper describes some of the key project's challenges, in particular with respect to the design of the user interface that translates visual information from the camera to guiding instructions for the blind person, taking into account limitations due to the visual impairment and proposing a multimodal interface that embeds human-machine co-adaptation.}
    }
  • M. Mangan, S. Schwarz, B. Webb, A. Wystrach, and J. Zeil, “How ants use vision when homing backward,” Current Biology, vol. 27, pp. 1-7, 2017.
    [BibTeX] [Abstract] [EPrints]

    Ants can navigate over long distances between their nest and food sites using visual cues [1 and 2]. Recent studies show that this capacity is undiminished when walking backward while dragging a heavy food item [3, 4 and 5]. This challenges the idea that ants use egocentric visual memories of the scene for guidance [1, 2 and 6]. Can ants use their visual memories of the terrestrial cues when going backward? Our results suggest that ants do not adjust their direction of travel based on the perceived scene while going backward. Instead, they maintain a straight direction using their celestial compass. This direction can be dictated by their path integrator [5] but can also be set using terrestrial visual cues after a forward peek. If the food item is too heavy to enable body rotations, ants moving backward drop their food on occasion, rotate and walk a few steps forward, return to the food, and drag it backward in a now-corrected direction defined by terrestrial cues. Furthermore, we show that ants can maintain their direction of travel independently of their body orientation. It thus appears that egocentric retinal alignment is required for visual scene recognition, but ants can translate this acquired directional information into a holonomic frame of reference, which enables them to decouple their travel direction from their body orientation and hence navigate backward. This reveals substantial flexibility and communication between different types of navigational information: from terrestrial to celestial cues and from egocentric to holonomic directional memories.

    @article{lirolem25891,
              volume = {27},
               month = {February},
               author = {Michael Mangan and Sebastian Schwarz and Barbara Webb and Antoine Wystrach and Jochen Zeil},
               title = {How ants use vision when homing backward},
           publisher = {Cell},
             journal = {Current Biology},
               pages = {1--7},
                year = {2017},
                 url = {http://eprints.lincoln.ac.uk/25891/},
            abstract = {Ants can navigate over long distances between their nest and food sites using visual cues [1 and 2]. Recent studies show that this capacity is undiminished when walking backward while dragging a heavy food item [3, 4 and 5]. This challenges the idea that ants use egocentric visual memories of the scene for guidance [1, 2 and 6]. Can ants use their visual memories of the terrestrial cues when going backward? Our results suggest that ants do not adjust their direction of travel based on the perceived scene while going backward. Instead, they maintain a straight direction using their celestial compass. This direction can be dictated by their path integrator [5] but can also be set using terrestrial visual cues after a forward peek. If the food item is too heavy to enable body rotations, ants moving backward drop their food on occasion, rotate and walk a few steps forward, return to the food, and drag it backward in a now-corrected direction defined by terrestrial cues. Furthermore, we show that ants can maintain their direction of travel independently of their body orientation. It thus appears that egocentric retinal alignment is required for visual scene recognition, but ants can translate this acquired directional information into a holonomic frame of reference, which enables them to decouple their travel direction from their body orientation and hence navigate backward. This reveals substantial flexibility and communication between different types of navigational information: from terrestrial to celestial cues and from egocentric to holonomic directional memories.}
    }
  • J. M. Santos, T. Krajník, and T. Duckett, “Spatio-temporal exploration strategies for long-term autonomy of mobile robots,” Robotics and Autonomous Systems, 2017.
    [BibTeX] [Abstract] [EPrints]

    We present a study of spatio-temporal environment representations and exploration strategies for long-term deployment of mobile robots in real-world, dynamic environments. We propose a new concept for life-long mobile robot spatio-temporal exploration that aims at building, updating and maintaining the environment model during the long-term deployment. The addition of the temporal dimension to the explored space makes the exploration task a never-ending data-gathering process, which we address by application of information-theoretic exploration techniques to world representations that model the uncertainty of environment states as probabilistic functions of time. We evaluate the performance of different exploration strategies and temporal models on real-world data gathered over the course of several months. The combination of dynamic environment representations with information-gain exploration principles allows to create and maintain up-to-date models of continuously changing environments, enabling efficient and self-improving long-term operation of mobile robots.

    @article{lirolem25412,
               month = {December},
               title = {Spatio-temporal exploration strategies for long-term autonomy of mobile robots},
              author = {Jo{\~a}o Machado Santos and Tom{\'a}{\vs} Krajn{\'i}k and Tom Duckett},
                year = {2017},
             journal = {Robotics and Autonomous Systems},
                 url = {http://eprints.lincoln.ac.uk/25412/},
             abstract = {We present a study of spatio-temporal environment representations and exploration strategies for long-term deployment of mobile robots in real-world, dynamic environments. We propose a new concept for life-long mobile robot spatio-temporal exploration that aims at building, updating and maintaining the environment model during the long-term deployment. The addition of the temporal dimension to the explored space makes the exploration task a never-ending data-gathering process, which we address by application of information-theoretic exploration techniques to world representations that model the uncertainty of environment states as probabilistic functions of time. We evaluate the performance of different exploration strategies and temporal models on real-world data gathered over the course of several months. The combination of dynamic environment representations with information-gain exploration principles allows to create and maintain up-to-date models of continuously changing environments, enabling efficient and self-improving long-term operation of mobile robots.}
    }
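
    A minimal sketch of the information-gain principle described above: if each location's state is modelled as a probability that varies with the time of day, the robot can choose to observe the location and time whose predicted state is most uncertain (highest entropy). The periodic toy model and the locations below are assumptions standing in for the paper's temporal representations.

      import math

      # Illustrative sketch: each location's state (e.g. "door open") is modelled
      # as a probability that depends on the hour; observe the location/time whose
      # predicted state is most uncertain, where a measurement is most informative.
      def p_occupied(location, hour):
          # Toy periodic model standing in for the learned temporal representation.
          phase = {"kitchen": 12, "office": 9, "corridor": 17}[location]
          return 0.5 + 0.45 * math.cos(2 * math.pi * (hour - phase) / 24)

      def entropy(p):
          if p in (0.0, 1.0):
              return 0.0
          return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

      candidates = [(loc, h) for loc in ("kitchen", "office", "corridor") for h in range(24)]
      best = max(candidates, key=lambda c: entropy(p_occupied(*c)))
      print("most informative observation:", best)
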
  • J. Xu, S. Yue, F. Menchinelli, and K. Guo, “What has been missed for predicting human attention in viewing driving clips?,” PeerJ, vol. 5, p. e2946, 2017.
    [BibTeX] [Abstract] [EPrints]

    Recent research progress on the topic of human visual attention allocation in scene perception and its simulation is based mainly on studies with static images. However, natural vision requires us to extract visual information that constantly changes due to egocentric movements or dynamics of the world. It is unclear to what extent spatio-temporal regularity, an inherent regularity in dynamic vision, affects human gaze distribution and saliency computation in visual attention models. In this free-viewing eye-tracking study we manipulated the spatio-temporal regularity of traffic videos by presenting them in normal video sequence, reversed video sequence, normal frame sequence, and randomised frame sequence. The recorded human gaze allocation was then used as the ‘ground truth’ to examine the predictive ability of a number of state-of-the-art visual attention models. The analysis revealed high inter-observer agreement across individual human observers, but all the tested attention models performed significantly worse than humans. The inferior predictability of the models was evident from indistinguishable gaze prediction irrespective of stimuli presentation sequence, and weak central fixation bias. Our findings suggest that a realistic visual attention model for the processing of dynamic scenes should incorporate human visual sensitivity with spatio-temporal regularity and central fixation bias.

    @article{lirolem25963,
              volume = {5},
               month = {February},
              author = {Jiawei Xu and Shigang Yue and Federica Menchinelli and Kun Guo},
               title = {What has been missed for predicting human attention in viewing driving clips?},
           publisher = {PeerJ},
             journal = {PeerJ},
               pages = {e2946},
                year = {2017},
                 url = {http://eprints.lincoln.ac.uk/25963/},
             abstract = {Recent research progress on the topic of human visual attention allocation in scene perception and its simulation is based mainly on studies with static images. However, natural vision requires us to extract visual information that constantly changes due to egocentric movements or dynamics of the world. It is unclear to what extent spatio-temporal regularity, an inherent regularity in dynamic vision, affects human gaze distribution and saliency computation in visual attention models. In this free-viewing eye-tracking study we manipulated the spatio-temporal regularity of traffic videos by presenting them in normal video sequence, reversed video sequence, normal frame sequence, and randomised frame sequence. The recorded human gaze allocation was then used as the 'ground truth' to examine the predictive ability of a number of state-of-the-art visual attention models. The analysis revealed high inter-observer agreement across individual human observers, but all the tested attention models performed significantly worse than humans. The inferior predictability of the models was evident from indistinguishable gaze prediction irrespective of stimuli presentation sequence, and weak central fixation bias. Our findings suggest that a realistic visual attention model for the processing of dynamic scenes should incorporate human visual sensitivity with spatio-temporal regularity and central fixation bias.}
    }

2016

  • P. B. Ardin, M. Mangan, and B. Webb, “Ant homing ability is not diminished when traveling backwards,” Frontiers in Behavioral Neuroscience, vol. 10, 2016.
    [BibTeX] [Abstract] [EPrints]

    Ants are known to be capable of homing to their nest after displacement to a novel location. This is widely assumed to involve some form of retinotopic matching between their current view and previously experienced views. One simple algorithm proposed to explain this behavior is continuous retinotopic alignment, in which the ant constantly adjusts its heading by rotating to minimize the pixel-wise difference of its current view from all views stored while facing the nest. However, ants with large prey items will often drag them home while facing backwards. We tested whether displaced ants (Myrmecia croslandi) dragging prey could still home despite experiencing an inverted view of their surroundings under these conditions. Ants moving backwards with food took similarly direct paths to the nest as ants moving forward without food, demonstrating that continuous retinotopic alignment is not a critical component of homing. It is possible that ants use initial or intermittent retinotopic alignment, coupled with some other direction stabilizing cue that they can utilize when moving backward. However, though most ants dragging prey would occasionally look toward the nest, we observed that their heading direction was not noticeably improved afterwards. We assume ants must use comparison of current and stored images for corrections of their path, but suggest they are either able to choose the appropriate visual memory for comparison using an additional mechanism; or can make such comparisons without retinotopic alignment.

    @article{lirolem23591,
              volume = {10},
               month = {April},
                title = {Ant homing ability is not diminished when traveling backwards},
              author = {Paul B. Ardin and Michael Mangan and Barbara Webb},
           publisher = {Frontiers Media SA},
                year = {2016},
             journal = {Frontiers in Behavioral Neuroscience},
                 url = {http://eprints.lincoln.ac.uk/23591/},
             abstract = {Ants are known to be capable of homing to their nest after displacement to a novel location. This is widely assumed to involve some form of retinotopic matching between their current view and previously experienced views. One simple algorithm proposed to explain this behavior is continuous retinotopic alignment, in which the ant constantly adjusts its heading by rotating to minimize the pixel-wise difference of its current view from all views stored while facing the nest. However, ants with large prey items will often drag them home while facing backwards. We tested whether displaced ants (Myrmecia croslandi) dragging prey could still home despite experiencing an inverted view of their surroundings under these conditions. Ants moving backwards with food took similarly direct paths to the nest as ants moving forward without food, demonstrating that continuous retinotopic alignment is not a critical component of homing. It is possible that ants use initial or intermittent retinotopic alignment, coupled with some other direction stabilizing cue that they can utilize when moving backward. However, though most ants dragging prey would occasionally look toward the nest, we observed that their heading direction was not noticeably improved afterwards. We assume ants must use comparison of current and stored images for corrections of their path, but suggest they are either able to choose the appropriate visual memory for comparison using an additional mechanism; or can make such comparisons without retinotopic alignment.}
    }
  • P. Ardin, F. Peng, M. Mangan, K. Lagogiannis, and B. Webb, “Using an insect mushroom body circuit to encode route memory in complex natural environments,” PLoS Computational Biology, vol. 12, iss. 2, p. e1004683, 2016.
    [BibTeX] [Abstract] [EPrints]

    Ants, like many other animals, use visual memory to follow extended routes through complex environments, but it is unknown how their small brains implement this capability. The mushroom body neuropils have been identified as a crucial memory circuit in the insect brain, but their function has mostly been explored for simple olfactory association tasks. We show that a spiking neural model of this circuit originally developed to describe fruitfly (Drosophila melanogaster) olfactory association, can also account for the ability of desert ants (Cataglyphis velox) to rapidly learn visual routes through complex natural environments. We further demonstrate that abstracting the key computational principles of this circuit, which include one-shot learning of sparse codes, enables the theoretical storage capacity of the ant mushroom body to be estimated at hundreds of independent images.

    @article{lirolem23571,
              volume = {12},
              number = {2},
               month = {February},
              author = {Paul Ardin and Fei Peng and Michael Mangan and Konstantinos Lagogiannis and Barbara Webb},
               title = {Using an insect mushroom body circuit to encode route memory in complex natural environments},
           publisher = {Public Library of Science for International Society for Computational Biology (ISCB)},
                year = {2016},
             journal = {PLoS Computational Biology},
               pages = {e1004683},
                 url = {http://eprints.lincoln.ac.uk/23571/},
            abstract = {Ants, like many other animals, use visual memory to follow extended routes through complex
    environments, but it is unknown how their small brains implement this capability. The
    mushroom body neuropils have been identified as a crucial memory circuit in the insect
    brain, but their function has mostly been explored for simple olfactory association tasks. We
    show that a spiking neural model of this circuit originally developed to describe fruitfly (Drosophila
    melanogaster) olfactory association, can also account for the ability of desert ants
    (Cataglyphis velox) to rapidly learn visual routes through complex natural environments. We
    further demonstrate that abstracting the key computational principles of this circuit, which
    include one-shot learning of sparse codes, enables the theoretical storage capacity of the
    ant mushroom body to be estimated at hundreds of independent images.}
    }
  • F. Arvin, A. E. Turgut, T. Krajnik, and S. Yue, “Investigation of cue-based aggregation in static and dynamic environments with a mobile robot swarm,” Adaptive Behavior, vol. 24, iss. 2, pp. 102-118, 2016.
    [BibTeX] [Abstract] [EPrints]

    Aggregation is one of the most fundamental behaviors that has been studied in swarm robotics research for more than two decades. The studies in biology revealed that environment is a preeminent factor in especially cue-based aggregation that can be defined as aggregation at a particular location which is a heat or a light source acting as a cue indicating an optimal zone. In swarm robotics, studies on cue-based aggregation mainly focused on different methods of aggregation and different parameters such as population size. Although of utmost importance, environmental effects on aggregation performance have not been studied systematically. In this paper, we study the effects of different environmental factors: size, texture and number of cues in a static setting and moving cues in a dynamic setting using real robots. We used aggregation time and size of the aggregate as the two metrics to measure aggregation performance. We performed real robot experiments with different population sizes and evaluated the performance of aggregation using the defined metrics. We also proposed a probabilistic aggregation model and predicted the aggregation performance accurately in most of the settings. The results of the experiments show that environmental conditions affect the aggregation performance considerably and have to be studied in depth.

    @article{lirolem22466,
              volume = {24},
              number = {2},
               month = {April},
              author = {Farshad Arvin and Ali Emre Turgut and Tomas Krajnik and Shigang Yue},
               title = {Investigation of cue-based aggregation in static and dynamic environments with a mobile robot swarm},
           publisher = {SAGE},
                year = {2016},
             journal = {Adaptive Behavior},
               pages = {102--118},
                 url = {http://eprints.lincoln.ac.uk/22466/},
            abstract = {Aggregation is one of the most fundamental behaviors that has been studied in swarm robotics research for more than two decades. The studies in biology revealed that environment is a preeminent factor in especially cue-based aggregation that can be defined as aggregation at a particular location which is a heat or a light source acting as a cue indicating an optimal zone. In swarm robotics, studies on cue-based aggregation mainly focused on different methods of aggregation and different parameters such as population size. Although of utmost importance, environmental effects on aggregation performance have not been studied systematically. In this paper, we study the effects of different environmental factors: size, texture and number of cues in a static setting and moving cues in a dynamic setting using real robots. We used aggregation time and size of the aggregate as the two metrics to measure aggregation performance. We performed real robot experiments with different population sizes and evaluated the performance of aggregation using the defined metrics. We also proposed a probabilistic aggregation model and predicted the aggregation performance accurately in most of the settings. The results of the experiments show that environmental conditions affect the aggregation performance considerably and have to be studied in depth.}
    }
  • G. Broughton, T. Krajnik, M. Fernandez-Carmona, G. Cielniak, and N. Bellotto, “RFID-based Object Localisation with a Mobile Robot to Assist the Elderly with Mild Cognitive Impairments,” in International Workshop on Intelligent Environments Supporting Healthcare and Well-being (WISHWell), 2016.
    [BibTeX] [Abstract] [EPrints]

    Mild Cognitive Impairments (MCI) disrupt the quality of life and reduce the independence of many elderly at home. People with MCI can increasingly become forgetful, hence solutions to help them find lost objects are useful. This paper presents a framework for mobile robots to localise objects in a domestic environment using Radio Frequency Identification (RFID) technology. In particular, it describes the development of a new library for interacting with RFID readers, readily available for the Robot Operating System (ROS), and introduces some methods for its application to RFID-based object localisation with a single antenna. The framework adopts occupancy grids to create a probabilistic representation of tag locations in the environment. A robot traversing the environment can then make use of this framework to keep an internal record of where objects were last spotted, and where they are most likely to be at any given point in time. Some preliminary results are presented, together with directions for future research.

    @inproceedings{lirolem23298,
           booktitle = {International Workshop on Intelligent Environments Supporting Healthcare and Well-being (WISHWell)},
               month = {September},
               title = {RFID-based Object Localisation with a Mobile Robot to Assist the Elderly with Mild Cognitive Impairments},
              author = {George Broughton and Tomas Krajnik and Manuel Fernandez-Carmona and Grzegorz Cielniak and Nicola Bellotto},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/23298/},
            abstract = {Mild Cognitive Impairments (MCI) disrupt the quality of life and reduce the independence of many elderly at home. People with MCI can increasingly become forgetful, hence solutions to help them find lost objects are useful. This paper presents a framework for mobile robots to localise objects in a domestic environment using Radio Frequency Identification (RFID) technology. In particular, it describes the development of a new library for interacting with RFID readers, readily available for the Robot Operating System (ROS), and introduces some methods for its application to RFID-based object localisation with a single antenna. The framework adopts occupancy grids to create a probabilistic representation of tag locations in the environment. A robot traversing the environment can then make use of this framework to keep an internal record of where objects were last spotted, and where they are most likely to be at any given point in time. Some preliminary results are presented, together with directions for future research.}
    }
  • W. Chen, C. Xiong, and S. Yue, “On configuration trajectory formation in spatiotemporal profile for reproducing human hand reaching movement,” IEEE Transactions on Cybernetics, vol. 46, iss. 3, 2016.
    [BibTeX] [Abstract] [EPrints]

    Most functional reaching activities in daily living generally require a hand to reach the functional position in appropriate orientation with invariant spatiotemporal profile. Effectively reproducing such spatiotemporal feature of hand configuration trajectory in real time is essential to understand the human motor control and plan human-like motion on anthropomorphic robotic arm. However, there are no novel computational models in literature toward reproducing hand configuration-to-configuration movement in spatiotemporal profile. In response to the problem, this paper presents a computational framework for hand configuration trajectory formation based on hierarchical principle of human motor control. The composite potential field is constructed on special Euclidean Group to induce time-varying configuration toward target. The dynamic behavior of hand is described by a second-order kinematic model to produce the external representation of high-level motor control. The multivariate regression relation between intrinsic and extrinsic coordinates of arm, is statistically analyzed for determining the arm orientation in real time, which produces the external representation of low-level motor control. The proposed method is demonstrated in an anthropomorphic arm by performing several highly curved self-reaching movements. The generated configuration trajectories are compared with actual human movement in spatiotemporal profile to validate the proposed method.

    @article{lirolem17880,
              volume = {46},
              number = {3},
               month = {March},
              author = {Wenbin Chen and Caihua Xiong and Shigang Yue},
               title = {On configuration trajectory formation in spatiotemporal profile for reproducing human hand reaching movement},
           publisher = {IEEE},
             journal = {IEEE Transactions on Cybernetics},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/17880/},
            abstract = {Most functional reaching activities in daily living generally require a hand to reach the functional position in appropriate orientation with invariant spatiotemporal profile. Effectively reproducing such spatiotemporal feature of hand configuration trajectory in real time is essential to understand the human motor control and plan human-like motion on anthropomorphic robotic arm. However, there are no novel computational models in literature toward reproducing hand configuration-to-configuration movement in spatiotemporal profile. In response to the problem, this paper presents a computational framework for hand configuration trajectory formation based on hierarchical principle of human motor control. The composite potential field is constructed on special Euclidean Group to induce time-varying configuration toward target. The dynamic behavior of hand is described by a second-order kinematic model to produce the external representation of high-level motor control. The multivariate regression relation between intrinsic and extrinsic coordinates of arm, is statistically analyzed for determining the arm orientation in real time, which produces the external representation of low-level motor control. The proposed method is demonstrated in an anthropomorphic arm by performing several highly curved self-reaching movements. The generated configuration trajectories are compared with actual human movement in spatiotemporal profile to validate the proposed method.}
    }
  • A. Coninx, P. Baxter, E. Oleari, S. Bellini, B. Bierman, O. B. Henkemans, L. Canamero, P. Cosi, V. Enescu, R. R. Espinoza, A. Hiolle, R. Humbert, B. Kiefer, I. Kruijff-korbayova, R. Looije, M. Mosconi, M. Neerincx, G. Paci, G. Patsis, C. Pozzi, F. Sacchitelli, H. Sahli, A. Sanna, G. Sommavilla, F. Tesser, Y. Demiris, and T. Belpaeme, “Towards long-term social child-robot interaction: using multi-activity switching to engage young users,” Journal of Human-Robot Interaction, vol. 5, iss. 1, pp. 32-67, 2016.
    [BibTeX] [Abstract] [EPrints]

    Social robots have the potential to provide support in a number of practical domains, such as learning and behaviour change. This potential is particularly relevant for children, who have proven receptive to interactions with social robots. To reach learning and therapeutic goals, a number of issues need to be investigated, notably the design of an effective child-robot interaction (cHRI) to ensure the child remains engaged in the relationship and that educational goals are met. Typically, current cHRI research experiments focus on a single type of interaction activity (e.g. a game). However, these can suffer from a lack of adaptation to the child, or from an increasingly repetitive nature of the activity and interaction. In this paper, we motivate and propose a practicable solution to this issue: an adaptive robot able to switch between multiple activities within single interactions. We describe a system that embodies this idea, and present a case study in which diabetic children collaboratively learn with the robot about various aspects of managing their condition. We demonstrate the ability of our system to induce a varied interaction and show the potential of this approach both as an educational tool and as a research method for long-term cHRI.

    @article{lirolem23074,
              volume = {5},
              number = {1},
               title = {Towards long-term social child-robot interaction: using multi-activity switching to engage young users},
              author = {Alexandre Coninx and Paul Baxter and Elettra Oleari and Sara Bellini and Bert Bierman and Olivier Blanson Henkemans and Lola Canamero and Piero Cosi and Valentin Enescu and Raquel Ros Espinoza and Antoine Hiolle and Remi Humbert and Bernd Kiefer and Ivana Kruijff-korbayova and Rosemarijn Looije and Marco Mosconi and Mark Neerincx and Giulio Paci and Georgios Patsis and Clara Pozzi and Francesca Sacchitelli and Hichem Sahli and Alberto Sanna and Giacomo Sommavilla and Fabio Tesser and Yiannis Demiris and Tony Belpaeme},
                year = {2016},
               pages = {32--67},
             journal = {Journal of Human-Robot Interaction},
                 url = {http://eprints.lincoln.ac.uk/23074/},
            abstract = {Social robots have the potential to provide support in a number of practical domains, such as learning and behaviour change. This potential is particularly relevant for children, who have proven receptive to interactions with social robots. To reach learning and therapeutic goals, a number of issues need to be investigated, notably the design of an effective child-robot interaction (cHRI) to ensure the child remains engaged in the relationship and that educational goals are met. Typically, current cHRI research experiments focus on a single type of interaction activity (e.g. a game). However, these can suffer from a lack of adaptation to the child, or from an increasingly repetitive nature of the activity and interaction. In this paper, we motivate and propose a practicable solution to this issue: an adaptive robot able to switch between multiple activities within single interactions. We describe a system that embodies this idea, and present a case study in which diabetic children collaboratively learn with the robot about various aspects of managing their condition. We demonstrate the ability of our system to induce a varied interaction and show the potential of this approach both as an educational tool and as a research method for long-term cHRI.}
    }
  • C. Coppola, T. Krajnik, T. Duckett, and N. Bellotto, “Learning temporal context for activity recognition,” in European Conference on Artificial Intelligence (ECAI), 2016.
    [BibTeX] [Abstract] [EPrints]

    We investigate how incremental learning of long-term human activity patterns improves the accuracy of activity classification over time. Rather than trying to improve the classification methods themselves, we assume that they can take into account prior probabilities of activities occurring at a particular time. We use the classification results to build temporal models that can provide these priors to the classifiers. As our system gradually learns about typical patterns of human activities, the accuracy of activity classification improves, which results in even more accurate priors. Two datasets collected over several months containing hand-annotated activity in residential and office environments were chosen to evaluate the approach. Several types of temporal models were evaluated for each of these datasets. The results indicate that incremental learning of daily routines leads to a significant improvement in activity classification.

    @inproceedings{lirolem23297,
           booktitle = {European Conference on Artificial Intelligence (ECAI)},
               month = {August},
               title = {Learning temporal context for activity recognition},
              author = {Claudio Coppola and Tomas Krajnik and Tom Duckett and Nicola Bellotto},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/23297/},
            abstract = {We investigate how incremental learning of long-term human activity patterns improves the accuracy of activity classification over time. Rather than trying to improve the classification methods themselves, we assume that they can take into account prior probabilities of activities occurring at a particular time. We use the classification results to build temporal models that can provide these priors to the classifiers. As our system gradually learns about typical patterns of human activities, the accuracy of activity classification improves, which results in even more accurate priors. Two datasets collected over several months containing hand-annotated activity in residential and office environments were chosen to evaluate the approach. Several types of temporal models were evaluated for each of these datasets. The results indicate that incremental learning of daily routines leads to a significant improvement in activity classification.}
    }
  • C. Coppola, D. Faria, U. Nunes, and N. Bellotto, “Social activity recognition based on probabilistic merging of skeleton features with proximity priors from RGB-D data,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016.
    [BibTeX] [Abstract] [EPrints]

    Social activity based on body motion is a key feature for non-verbal and physical behavior defined as function for communicative signal and social interaction between individuals. Social activity recognition is important to study human-human communication and also human-robot interaction. Based on that, this research has threefold goals: (1) recognition of social behavior (e.g. human-human interaction) using a probabilistic approach that merges spatio-temporal features from individual bodies and social features from the relationship between two individuals; (2) learn priors based on physical proximity between individuals during an interaction using proxemics theory to feed a probabilistic ensemble of classifiers; and (3) provide a public dataset with RGB-D data of social daily activities including risk situations useful to test approaches for assisted living, since this type of dataset is still missing. Results show that using a modified dynamic Bayesian mixture model designed to merge features with different semantics and also with proximity priors, the proposed framework can correctly recognize social activities in different situations, e.g. using data from one or two individuals.

    @inproceedings{lirolem23425,
           booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
               month = {October},
               title = {Social activity recognition based on probabilistic merging of skeleton features with proximity priors from RGB-D data},
              author = {Claudio Coppola and Diego Faria and Urbano Nunes and Nicola Bellotto},
           publisher = {IEEE},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/23425/},
            abstract = {Social activity based on body motion is a key feature for non-verbal and physical behavior defined as function for communicative signal and social interaction between individuals. Social activity recognition is important to study human-human communication and also human-robot interaction. Based on that, this research has threefold goals: (1) recognition of social behavior (e.g. human-human interaction) using a probabilistic approach that merges spatio-temporal features from individual bodies and social features from the relationship between two individuals; (2) learn priors based on physical proximity between individuals during an interaction using proxemics theory to feed a probabilistic ensemble of classifiers; and (3) provide a public dataset with RGB-D data
    of social daily activities including risk situations useful to test approaches for assisted living, since this type of dataset is still missing. Results show that using a modified dynamic Bayesian mixture model designed to merge features with different semantics and also with proximity priors, the proposed framework can correctly recognize social activities in different situations, e.g. using data from one or two individuals.}
    }
  • H. Cuayahuitl, G. Couly, and C. Olalainty, “Training an interactive humanoid robot using multimodal deep reinforcement learning,” in NIPS Workshop on Deep Reinforcement Learning, 2016.
    [BibTeX] [Abstract] [EPrints]

    Training robots to perceive, act and communicate using multiple modalities still represents a challenging problem, particularly if robots are expected to learn efficiently from small sets of example interactions. We describe a learning approach as a step in this direction, where we teach a humanoid robot how to play the game of noughts and crosses. Given that multiple multimodal skills can be trained to play this game, we focus our attention to training the robot to perceive the game, and to interact in this game. Our multimodal deep reinforcement learning agent perceives multimodal features and exhibits verbal and non-verbal actions while playing. Experimental results using simulations show that the robot can learn to win or draw up to 98% of the games. A pilot test of the proposed multimodal system for the targeted game—integrating speech, vision and gestures—reports that reasonable and fluent interactions can be achieved using the proposed approach.

    @inproceedings{lirolem25937,
              volume = {abs/16},
               month = {December},
              author = {Heriberto Cuayahuitl and Guillaume Couly and Clement Olalainty},
           booktitle = {NIPS Workshop on Deep Reinforcement Learning},
               title = {Training an interactive humanoid robot using multimodal deep reinforcement learning},
           publisher = {arXiv},
             journal = {CoRR},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/25937/},
            abstract = {Training robots to perceive, act and communicate using multiple modalities still represents a challenging problem, particularly if robots are expected to learn efficiently from small sets of example interactions. We describe a learning approach as a step in this direction, where we teach a humanoid robot how to play the game of noughts and crosses. Given that multiple multimodal skills can be trained to play this game, we focus our attention to training the robot to perceive the game, and to interact in this game. Our multimodal deep reinforcement learning agent perceives multimodal features and exhibits verbal and non-verbal actions while playing. Experimental results using simulations show that the robot can learn to win or draw up to 98\% of the games. A pilot test of the proposed multimodal system for the targeted game---integrating speech, vision and gestures---reports that reasonable and fluent interactions can be achieved using the proposed approach.}
    }
  • H. Cuayahuitl, S. Yu, A. Williamson, and J. Carse, “Deep reinforcement learning for multi-domain dialogue systems,” in NIPS Workshop on Deep Reinforcement Learning, 2016.
    [BibTeX] [Abstract] [EPrints]

    Standard deep reinforcement learning methods such as Deep Q-Networks (DQN) for multiple tasks (domains) face scalability problems. We propose a method for multi-domain dialogue policy learning—termed NDQN, and apply it to an information-seeking spoken dialogue system in the domains of restaurants and hotels. Experimental results comparing DQN (baseline) versus NDQN (proposed) using simulations report that our proposed method exhibits better scalability and is promising for optimising the behaviour of multi-domain dialogue systems.

    @inproceedings{lirolem25935,
              volume = {abs/16},
               month = {December},
              author = {Heriberto Cuayahuitl and Seunghak Yu and Ashley Williamson and Jacob Carse},
           booktitle = {NIPS Workshop on Deep Reinforcement Learning},
               title = {Deep reinforcement learning for multi-domain dialogue systems},
           publisher = {arXiv},
             journal = {CoRR},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/25935/},
            abstract = {Standard deep reinforcement learning methods such as Deep Q-Networks (DQN) for multiple tasks (domains) face scalability problems. We propose a method for multi-domain dialogue policy learning---termed NDQN, and apply it to an information-seeking spoken dialogue system in the domains of restaurants and hotels. Experimental results comparing DQN (baseline) versus NDQN (proposed) using simulations report that our proposed method exhibits better scalability and is promising for optimising the behaviour of multi-domain dialogue systems.}
    }
  • N. Dethlefs, H. Hastie, H. Cuayahuitl, Y. Yu, V. Rieser, and O. Lemon, “Information density and overlap in spoken dialogue,” Computer Speech & Language, vol. 37, pp. 82-97, 2016.
    [BibTeX] [Abstract] [EPrints]

    Incremental dialogue systems are often perceived as more responsive and natural because they are able to address phenomena of turn-taking and overlapping speech, such as backchannels or barge-ins. Previous work in this area has often identified distinctive prosodic features, or features relating to syntactic or semantic completeness, as marking appropriate places of turn-taking. In a separate strand of work, psycholinguistic studies have established a connection between information density and prominence in language–the less expected a linguistic unit is in a particular context, the more likely it is to be linguistically marked. This has been observed across linguistic levels, including the prosodic, which plays an important role in predicting overlapping speech. In this article, we explore the hypothesis that information density (ID) also plays a role in turn-taking. Specifically, we aim to show that humans are sensitive to the peaks and troughs of information density in speech, and that overlapping speech at ID troughs is perceived as more acceptable than overlaps at ID peaks. To test our hypothesis, we collect human ratings for three models of generating overlapping speech based on features of: (1) prosody and semantic or syntactic completeness, (2) information density, and (3) both types of information. Results show that over 50% of users preferred the version using both types of features, followed by a preference for information density features alone. This indicates a clear human sensitivity to the effects of information density in spoken language and provides a strong motivation to adopt this metric for the design, development and evaluation of turn-taking modules in spoken and incremental dialogue systems.

    @article{lirolem22216,
              volume = {37},
               month = {May},
              author = {Nina Dethlefs and Helen Hastie and Heriberto Cuayahuitl and Yanchao Yu and Verena Rieser and Oliver Lemon},
               title = {Information density and overlap in spoken dialogue},
           publisher = {Elsevier for International Speech Communication Association (ISCA)},
             journal = {Computer Speech \& Language},
               pages = {82--97},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/22216/},
            abstract = {Incremental dialogue systems are often perceived as more responsive and natural because they are able to address phenomena of turn-taking and overlapping speech, such as backchannels or barge-ins. Previous work in this area has often identified distinctive prosodic features, or features relating to syntactic or semantic completeness, as marking appropriate places of turn-taking. In a separate strand of work, psycholinguistic studies have established a connection between information density and prominence in language{--}the less expected a linguistic unit is in a particular context, the more likely it is to be linguistically marked. This has been observed across linguistic levels, including the prosodic, which plays an important role in predicting overlapping speech.
    
    In this article, we explore the hypothesis that information density (ID) also plays a role in turn-taking. Specifically, we aim to show that humans are sensitive to the peaks and troughs of information density in speech, and that overlapping speech at ID troughs is perceived as more acceptable than overlaps at ID peaks. To test our hypothesis, we collect human ratings for three models of generating overlapping speech based on features of: (1) prosody and semantic or syntactic completeness, (2) information density, and (3) both types of information. Results show that over 50\% of users preferred the version using both types of features, followed by a preference for information density features alone. This indicates a clear human sensitivity to the effects of information density in spoken language and provides a strong motivation to adopt this metric for the design, development and evaluation of turn-taking modules in spoken and incremental dialogue systems.}
    }
  • P. Dickinson, O. Szymanezyk, G. Cielniak, and M. Mannion, “Indoor positioning of shoppers using a network of bluetooth low energy beacons,” in 2016 International Conference on Indoor Positioning and Indoor Navigation (IPIN), 4-7 October 2016, Alcalá de Henares, Spain, 2016.
    [BibTeX] [Abstract] [EPrints]

    In this paper we present our work on the indoor positioning of users (shoppers), using a network of Bluetooth Low Energy (BLE) beacons deployed in a large wholesale shopping store. Our objective is to accurately determine which product sections a user is adjacent to while traversing the store, using RSSI readings from multiple beacons, measured asynchronously on a standard commercial mobile device. We further wish to leverage the store layout (which imposes natural constraints on the movement of users) and the physical configuration of the beacon network, to produce a robust and efficient solution. We start by describing our application context and hardware configuration, and proceed to introduce our node-graph model of user location. We then describe our experimental work which begins with an investigation of signal characteristics along and across aisles. We propose three methods of localization, using a 'nearest-beacon' approach as a baseline; exponentially averaged weighted range estimates; and a particle-filter method based on the RSSI attenuation model and Gaussian noise. Our results demonstrate that the particle filter method significantly outperforms the others. Scalability also makes this method ideal for applications run on mobile devices with more limited computational capabilities.

    @inproceedings{lirolem24589,
           booktitle = {2016 International Conference on Indoor Positioning and Indoor Navigation (IPIN), 4-7 October 2016, Alcal{\'a} de Henares, Spain},
               month = {October},
               title = {Indoor positioning of shoppers using a network of bluetooth low energy beacons},
              author = {Patrick Dickinson and Olivier Szymanezyk and Grzegorz Cielniak and Mike Mannion},
           publisher = {IEEE Xplore},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/24589/},
            abstract = {In this paper we present our work on the indoor positioning of users (shoppers), using a network of Bluetooth Low Energy (BLE) beacons deployed in a large wholesale shopping store. Our objective is to accurately determine which product sections a user is adjacent to while traversing the store, using RSSI readings from multiple beacons, measured asynchronously on a standard commercial mobile device. We further wish to leverage the store layout (which imposes natural constraints on the movement of users) and the physical configuration of the beacon network, to produce a robust and efficient solution. We start by describing our application context and hardware configuration, and proceed to introduce our node-graph model of user location. We then describe our experimental work which begins with an investigation of signal characteristics along and across aisles. We propose three methods of localization, using a 'nearest-beacon' approach as a baseline; exponentially averaged weighted range estimates; and a particle-filter method based on the RSSI attenuation model and Gaussian noise. Our results demonstrate that the particle filter method significantly outperforms the others. Scalability also makes this method ideal for applications run on mobile devices with more limited computational capabilities.}
    }
  • J. P. Fentanes, T. Krajnik, M. Hanheide, and T. Duckett, “Persistent localization and life-long mapping in changing environments using the frequency map enhancement,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016.
    [BibTeX] [Abstract] [EPrints]

    We present a lifelong mapping and localisation system for long-term autonomous operation of mobile robots in changing environments. The core of the system is a spatio-temporal occupancy grid that explicitly represents the persistence and periodicity of the individual cells and can predict the probability of their occupancy in the future. During navigation, our robot builds temporally local maps and integrates them into the global spatio-temporal grid. Through re-observation of the same locations, the spatio-temporal grid learns the long-term environment dynamics and gains the ability to predict the future environment states. This predictive ability allows to generate time-specific 2d maps used by the robot’s localisation and planning modules. By analysing data from a long-term deployment of the robot in a human-populated environment, we show that the proposed representation improves localisation accuracy and the efficiency of path planning. We also show how to integrate the method into the ROS navigation stack for use by other roboticists.

    @inproceedings{lirolem24088,
           booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
               month = {October},
               title = {Persistent localization and life-long mapping in changing environments using the frequency map enhancement},
              author = {Jaime Pulido Fentanes and Tomas Krajnik and Marc Hanheide and Tom Duckett},
           publisher = {IEEE},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/24088/},
            abstract = {We present a lifelong mapping and localisation system for long-term autonomous operation of mobile robots in changing environments.
    The core of the system is a spatio-temporal occupancy grid that explicitly represents the persistence and periodicity of the individual cells and can predict the probability of their occupancy in the future.
    During navigation, our robot builds temporally local maps and integrates them into the global spatio-temporal grid. Through re-observation of the same locations, the spatio-temporal grid learns the long-term environment dynamics and gains the ability to predict the future environment states. This predictive ability allows to generate time-specific 2d maps used by the robot's localisation and planning modules. By analysing data from a long-term deployment of the robot in a human-populated environment, we show that the proposed representation improves localisation accuracy and the efficiency of path planning. We also show how to integrate the method into the ROS navigation stack for use by other roboticists.}
    }
  • M. Fernandez-Carmona and N. Bellotto, “On-line inference comparison with Markov Logic Network engines for activity recognition in AAL environments,” in IEEE International Conference on Intelligent Environments, 2016.
    [BibTeX] [Abstract] [EPrints]

    We address possible solutions for a practical application of Markov Logic Networks to online activity recognition, based on domotic sensors, to be used for monitoring elderly with mild cognitive impairments. Our system has to provide responsive information about user activities throughout the day, so different inference engines are tested. We use an abstraction layer to gather information from commercial domotic sensors. Sensor events are stored using a non-relational database. Using this database, evidences are built to query a logic network about current activities. Markov Logic Networks are able to deal with uncertainty while keeping a structured knowledge. This makes them a suitable tool for ambient sensors based inference. However, in their previous application, inferences are usually made offline. Time is a relevant constraint in our system and hence logic networks are designed here accordingly. We compare in this work different engines to model a Markov Logic Network suitable for such circumstances. Results show some insights about how to design a low latency logic network and which kind of solutions should be avoided.

    @inproceedings{lirolem23189,
           booktitle = {IEEE International Conference on Intelligent Environments},
               month = {September},
               title = {On-line inference comparison with Markov Logic Network engines for activity recognition in AAL environments},
              author = {Manuel Fernandez-Carmona and Nicola Bellotto},
           publisher = {IEEE},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/23189/},
            abstract = {We address possible solutions for a practical application of Markov Logic Networks to online activity recognition, based on domotic sensors, to be used for monitoring elderly with mild cognitive impairments. Our system has to provide responsive information about user activities throughout the day, so different inference engines are tested. We use an abstraction layer to gather information from commercial domotic sensors. Sensor events are stored using a non-relational database. Using this database, evidences are built to query a logic network about current activities. Markov Logic Networks are able to deal with uncertainty while keeping a structured knowledge. This makes them a suitable tool for ambient sensors based inference. However, in their previous application, inferences are usually made offline. Time is a relevant constraint in our system and hence logic networks are designed here accordingly. We compare in this work different engines to model a Markov Logic Network suitable for such circumstances. Results show some insights about how to design a low latency logic network and which kind of solutions should be avoided.}
    }
  • Q. Fu, S. Yue, and C. Hu, “Bio-inspired collision detector with enhanced selectivity for ground robotic vision system,” in 27th British Machine Vision Conference, 2016.
    [BibTeX] [Abstract] [EPrints]

    There are many ways of building collision-detecting systems. In this paper, we propose a novel collision selective visual neural network inspired by LGMD2 neurons in the juvenile locusts. Such collision-sensitive neuron matures early in the first-aged or even hatching locusts, and is only selective to detect looming dark objects against bright background in depth, representing swooping predators, a situation which is similar to ground robots or vehicles. However, little has been done on modeling LGMD2, let alone its potential applications in robotics and other vision-based areas. Compared to other collision detectors, our major contributions are first, enhancing the collision selectivity in a bio-inspired way, via constructing a computing efficient visual sensor, and realizing the revealed specific characteristics of LGMD2. Second, we applied the neural network to help rearrange path navigation of an autonomous ground miniature robot in an arena. We also examined its neural properties through systematic experiments challenged against image streams from a visual sensor of the micro-robot.

    @inproceedings{lirolem24941,
           booktitle = {27th British Machine Vision Conference},
               month = {September},
               title = {Bio-inspired collision detector with enhanced selectivity for ground robotic vision system},
              author = {Qinbing Fu and Shigang Yue and Cheng Hu},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/24941/},
            abstract = {There are many ways of building collision-detecting systems. In this paper, we propose a novel collision selective visual neural network inspired by LGMD2 neurons in the juvenile locusts. Such collision-sensitive neuron matures early in the first-aged or even hatching locusts, and is only selective to detect looming dark objects against bright background in depth, representing swooping predators, a situation which is similar to ground robots or vehicles. However, little has been done on modeling LGMD2, let alone its potential applications in robotics and other vision-based areas. Compared to other collision detectors, our major contributions are first, enhancing the collision selectivity in a bio-inspired way, via constructing a computing efficient visual sensor, and realizing the revealed specific characteristics of LGMD2. Second, we applied the neural network to help rearrange path navigation of an autonomous ground miniature robot in an arena. We also examined its neural properties through systematic experiments challenged against image streams from a visual sensor of the micro-robot.}
    }
  • Y. Gatsoulis, M. Alomari, C. Burbridge, C. Dondrup, P. Duckworth, P. Lightbody, M. Hanheide, N. Hawes, D. C. Hogg, and A. G. Cohn, “QSRlib: a software library for online acquisition of qualitative spatial relations from video,” in 29th International Workshop on Qualitative Reasoning (QR16), at IJCAI-16, 2016.
    [BibTeX] [Abstract] [EPrints]

    There is increasing interest in using Qualitative Spatial Relations as a formalism to abstract from noisy and large amounts of video data in order to form high level conceptualisations, e.g. of activities present in video. We present a library to support such work. It is compatible with the Robot Operating System (ROS) but can also be used stand alone. A number of QSRs are built in; others can be easily added.

    @inproceedings{lirolem24853,
           booktitle = {29th International Workshop on Qualitative Reasoning (QR16), at IJCAI-16},
               month = {July},
               title = {QSRlib: a software library for online acquisition of qualitative spatial relations from video},
              author = {Y. Gatsoulis and M. Alomari and C. Burbridge and C. Dondrup and P. Duckworth and P. Lightbody and M. Hanheide and N. Hawes and D. C. Hogg and A. G. Cohn},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/24853/},
            abstract = {There is increasing interest in using Qualitative Spatial
    Relations as a formalism to abstract from noisy and
    large amounts of video data in order to form high level
    conceptualisations, e.g. of activities present in video.
    We present a library to support such work. It is compatible
    with the Robot Operating System (ROS) but can
    also be used stand alone. A number of QSRs are built
    in; others can be easily added.}
    }
  • K. Gerling, D. Hebesberger, C. Dondrup, T. Körtner, and M. Hanheide, “Robot deployment in long-term care: a case study of a mobile robot in physical therapy,” Zeitschrift für Geriatrie und Gerontologie, vol. 49, iss. 4, pp. 288-297, 2016.
    [BibTeX] [Abstract] [EPrints]

    Background. Healthcare systems in industrialised countries are challenged to provide care for a growing number of older adults. Information technology holds the promise of facilitating this process by providing support for care staff, and improving wellbeing of older adults through a variety of support systems. Goal. Little is known about the challenges that arise from the deployment of technology in care settings; yet, the integration of technology into care is one of the core determinants of successful support. In this paper, we discuss challenges and opportunities associated with technology integration in care using the example of a mobile robot to support physical therapy among older adults with cognitive impairment in the European project STRANDS. Results and discussion. We report on technical challenges along with perspectives of physical therapists, and provide an overview of lessons learned which we hope will help inform the work of researchers and practitioners wishing to integrate robotic aids in the caregiving process.

    @article{lirolem22902,
              volume = {49},
              number = {4},
               month = {June},
              author = {Kathrin Gerling and Denise Hebesberger and Christian Dondrup and Tobias K{\"o}rtner and Marc Hanheide},
               title = {Robot deployment in long-term care: a case study of a mobile robot in physical therapy},
           publisher = {Springer for Bundesverband Geriatrie / Deutsche Gesellschaft f{\"u}r Gerontologie und Geriatrie},
                year = {2016},
             journal = {Zeitschrift f{\"u}r Geriatrie und Gerontologie},
               pages = {288--297},
                 url = {http://eprints.lincoln.ac.uk/22902/},
            abstract = {Background. Healthcare systems in industrialised countries are challenged to provide
    care for a growing number of older adults. Information technology holds the promise of
    facilitating this process by providing support for care staff, and improving wellbeing of
    older adults through a variety of support systems. Goal. Little is known about the
    challenges that arise from the deployment of technology in care settings; yet, the
    integration of technology into care is one of the core determinants of successful
    support. In this paper, we discuss challenges and opportunities associated with
    technology integration in care using the example of a mobile robot to support physical
    therapy among older adults with cognitive impairment in the European project
    STRANDS. Results and discussion. We report on technical challenges along with
    perspectives of physical therapists, and provide an overview of lessons learned which
    we hope will help inform the work of researchers and practitioners wishing to integrate
    robotic aids in the caregiving process.}
    }
  • E. Gyebi, M. Hanheide, and G. Cielniak, “The effectiveness of integrating educational robotic activities into higher education Computer Science curricula: a case study in a developing country,” in Edurobotics 2016, 2016.
    [BibTeX] [Abstract] [EPrints]

    In this paper, we present a case study to investigate the effects of educational robotics on a formal undergraduate Computer Science education in a developing country. The key contributions of this paper include a longitudinal study design, spanning the whole duration of one taught course, and its focus on continually assessing the effectiveness and the impact of robotic-based exercises. The study assessed the students’ motivation, engagement and level of understanding in learning general computer programming. The survey results indicate that there are benefits which can be gained from such activities and educational robotics is a promising tool in developing engaging study curricula. We hope that our experience from this study together with the free materials and data available for download will be beneficial to other practitioners working with educational robotics in different parts of the world.

    @inproceedings{lirolem25579,
           booktitle = {Edurobotics 2016},
               month = {November},
               title = {The effectiveness of integrating educational robotic activities into higher education Computer Science curricula: a case study in a developing country},
              author = {Ernest Gyebi and Marc Hanheide and Grzegorz Cielniak},
           publisher = {Springer},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/25579/},
            abstract = {In this paper, we present a case study to investigate the effects of educational robotics on a formal undergraduate Computer Science education in a developing country. The key contributions of this paper include a longitudinal study design, spanning the whole duration of one taught course, and its focus on continually assessing the effectiveness and the impact of robotic-based exercises. The study assessed the  students' motivation, engagement and level of understanding in learning general computer programming. The survey results indicate that there are benefits which can be gained from such activities and educational robotics is a promising tool in developing engaging study curricula. We hope that our experience from this study together with the free materials and data available for download will be beneficial to other practitioners working with educational robotics in different parts of the world.}
    }
  • C. Hu, F. Arvin, C. Xiong, and S. Yue, “A bio-inspired embedded vision system for autonomous micro-robots: the LGMD case,” IEEE Transactions on Cognitive and Developmental Systems, vol. PP, iss. 99, pp. 1-14, 2016.
    [BibTeX] [Abstract] [EPrints]

    In this paper, we present a new bio-inspired vision system embedded for micro-robots. The vision system takes inspiration from locusts in detecting fast approaching objects. Neurophysiological research suggested that locusts use a wide-field visual neuron called lobula giant movement detector (LGMD) to respond to imminent collisions. In this work, we present the implementation of the selected neuron model by a low-cost ARM processor as part of a composite vision module. As the first embedded LGMD vision module fits to a micro-robot, the developed system performs all image acquisition and processing independently. The vision module is placed on top of a microrobot to initiate obstacle avoidance behaviour autonomously. Both simulation and real-world experiments were carried out to test the reliability and robustness of the vision system. The results of the experiments with different scenarios demonstrated the potential of the bio-inspired vision system as a low-cost embedded module for autonomous robots.

    @article{lirolem25279,
              volume = {PP},
              number = {99},
               month = {May},
              author = {Cheng Hu and Farshad Arvin and Caihua Xiong and Shigang Yue},
               title = {A bio-inspired embedded vision system for autonomous micro-robots: the LGMD case},
           publisher = {IEEE},
                year = {2016},
             journal = {IEEE Transactions on Cognitive and Developmental Systems},
               pages = {1--14},
                 url = {http://eprints.lincoln.ac.uk/25279/},
            abstract = {In this paper, we present a new bio-inspired vision
    system embedded for micro-robots. The vision system takes inspiration
    from locusts in detecting fast approaching objects. Neurophysiological
    research suggested that locusts use a wide-field
    visual neuron called lobula giant movement detector (LGMD)
    to respond to imminent collisions. In this work, we present
    the implementation of the selected neuron model by a low-cost
    ARM processor as part of a composite vision module. As the
    first embedded LGMD vision module fits to a micro-robot, the
    developed system performs all image acquisition and processing
    independently. The vision module is placed on top of a microrobot
    to initiate obstacle avoidance behaviour autonomously. Both
    simulation and real-world experiments were carried out to test
    the reliability and robustness of the vision system. The results
    of the experiments with different scenarios demonstrated the
    potential of the bio-inspired vision system as a low-cost embedded
    module for autonomous robots.}
    }
  • J. Kennedy, P. Baxter, E. Senft, and T. Belpaeme, “Social robot tutoring for child second language learning,” in Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction HRI 2016, Christchurch, New Zealand, 2016, pp. 231-238.
    [BibTeX] [Abstract] [EPrints]

    An increasing amount of research is being conducted to determine how a robot tutor should behave socially in educational interactions with children. Both human-human and human-robot interaction literature predicts an increase in learning with increased social availability of a tutor, where social availability has verbal and nonverbal components. Prior work has shown that greater availability in the nonverbal behaviour of a robot tutor has a positive impact on child learning. This paper presents a study with 67 children to explore how social aspects of a tutor robot’s speech influence their perception of the robot and their language learning in an interaction. Children perceive the difference in social behaviour between ‘low’ and ‘high’ verbal availability conditions, and improve significantly between a pre- and a post-test in both conditions. A longer-term retention test taken the following week showed that the children had retained almost all of the information they had learnt. However, learning was not affected by which of the robot behaviours they had been exposed to. It is suggested that in this short-term interaction context, additional effort in developing social aspects of a robot’s verbal behaviour may not return the desired positive impact on learning gains.

    @inproceedings{lirolem24855,
               month = {March},
              author = {James Kennedy and Paul Baxter and Emmanuel Senft and Tony Belpaeme},
           booktitle = {Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction HRI 2016},
             address = {Christchurch, New Zealand},
               title = {Social robot tutoring for child second language learning},
           publisher = {ACM Press},
               pages = {231--238},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/24855/},
            abstract = {An increasing amount of research is being conducted to determine how a robot tutor should behave socially in educational interactions with children. Both human-human and human-robot interaction literature predicts an increase in learning with increased social availability of a tutor, where social availability has verbal and nonverbal components. Prior work has shown that greater availability in the nonverbal behaviour of a robot tutor has a positive impact on child learning. This paper presents a study with 67 children to explore how social aspects of a tutor robot's speech influence their perception of the robot and their language learning in an interaction. Children perceive the difference in social behaviour between 'low' and 'high' verbal availability conditions, and improve significantly between a pre- and a post-test in both conditions. A longer-term retention test taken the following week showed that the children had retained almost all of the information they had learnt. However, learning was not affected by which of the robot behaviours they had been exposed to. It is suggested that in this short-term interaction context, additional effort in developing social aspects of a robot's verbal behaviour may not return the desired positive impact on learning gains.}
    }
  • T. Krajnik, J. P. Fentanes, J. Santos, and T. Duckett, “Frequency map enhancement: introducing dynamics into static environment models,” in ICRA Workshop AI for Long-Term Autonomy, 2016.
    [BibTeX] [Abstract] [EPrints]

    We present applications of the Frequency Map Enhancement (FreMEn), which improves the performance of mobile robots in long-term scenarios by introducing the notion of dynamics into their (originally static) environment models. Rather than using a fixed probability value, the method models the uncertainty of the elementary environment states by their frequency spectra. This allows sparse and irregular observations obtained during long-term deployments of mobile robots to be integrated into memory-efficient spatio-temporal models that reflect mid- and long-term pseudo-periodic environment variations. The frequency-enhanced spatio-temporal models allow the future environment states to be predicted, which improves the efficiency of mobile robot operation in changing environments. In a series of experiments performed over periods of weeks to years, we demonstrate that the proposed approach improves mobile robot localization, path and task planning, and activity recognition, and allows for life-long spatio-temporal exploration.

    @inproceedings{lirolem23261,
           booktitle = {ICRA Workshop AI for Long-Term Autonomy},
               month = {May},
               title = {Frequency map enhancement: introducing dynamics into static environment models},
              author = {Tomas Krajnik and Jaime Pulido Fentanes and Joao Santos and Tom Duckett},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/23261/},
            abstract = {We present applications of the Frequency Map Enhancement (FreMEn), which improves the performance of mobile robots in long-term scenarios by introducing the notion of dynamics into their (originally static) environment models. Rather than using a fixed probability value, the method models the uncertainty of the elementary environment states by their frequency spectra. This allows to integrate sparse and irregular observations obtained during long-term deployments of mobile robots into memory-efficient spatio-temporal models that reflect mid- and long-term pseudo-periodic environment variations. The frequency-enhanced spatio-temporal models allow to predict the future environment states, which improves the efficiency of mobile robot operation in changing environments.   In a series of experiments performed over periods of weeks to years, we demonstrate that the proposed approach improves mobile robot localization, path and task planning, activity recognition and allows for life-long spatio-temporal exploration.}
    }
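
    FreMEn represents the probability of a binary environment state as a mean value plus a few dominant periodic components identified in the observation history. The toy NumPy sketch below illustrates that idea on synthetic "door open" observations; the candidate periods, component count and data are invented for the example and this is not the authors' released implementation.

    import numpy as np

    def fremen_fit(times, states, periods, n_components=2):
        """Fit a FreMEn-style model: a mean plus the strongest periodic
        components of a binary state observed at irregular times (seconds)."""
        times = np.asarray(times, float)
        states = np.asarray(states, float)
        mean = states.mean()
        residual = states - mean
        comps = []
        for T in periods:
            omega = 2 * np.pi / T
            c = (residual * np.cos(omega * times)).mean()   # cosine projection
            s = (residual * np.sin(omega * times)).mean()   # sine projection
            comps.append((np.hypot(c, s), omega, np.arctan2(s, c)))
        comps.sort(reverse=True)                            # keep the strongest components
        return mean, comps[:n_components]

    def fremen_predict(model, t):
        """Predicted probability that the state is 1 at (future) time t."""
        mean, comps = model
        p = mean + 2 * sum(a * np.cos(w * t - phi) for a, w, phi in comps)
        return float(np.clip(p, 0.0, 1.0))

    # Toy data: a door that tends to be open during the first half of each day.
    day = 24 * 3600
    rng = np.random.default_rng(1)
    obs_t = rng.uniform(0, 14 * day, 500)                   # sparse, irregular samples
    obs_s = ((obs_t % day) / day < 0.5).astype(float)

    model = fremen_fit(obs_t, obs_s, periods=[day, day / 2, 7 * day])
    for hour in (3, 9, 15, 21):
        print(hour, "h ->", round(fremen_predict(model, 20 * day + hour * 3600), 2))
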
  • M. Kulich, T. Krajnik, L. Preucil, and T. Duckett, “To explore or to exploit? Learning humans’ behaviour to maximize interactions with them,” in International Workshop on Modelling and Simulation for Autonomous Systems, 2016, pp. 48-63.
    [BibTeX] [Abstract] [EPrints]

    Assume a robot operating in a public space (e.g., a library, a museum) and serving visitors as a companion, a guide or an information stand. To do that, the robot has to interact with humans, which presumes that it actively searches for humans in order to interact with them. This paper addresses the problem of how to plan the robot's actions in order to maximize the number of such interactions when human behaviour is not known in advance. We formulate this as an exploration/exploitation problem and design several strategies for the robot. The main contribution of the paper then lies in the evaluation and comparison of the designed strategies on two datasets. The evaluation shows interesting properties of the strategies, which are discussed.

    @inproceedings{lirolem26195,
           booktitle = {International Workshop on Modelling and Simulation for Autonomous Systems},
               month = {June},
               title = {To explore or to exploit? Learning humans' behaviour to maximize interactions with them},
              author = {Miroslav Kulich and Tomas Krajnik and Libor Preucil and Tom Duckett},
           publisher = {Springer},
                year = {2016},
               pages = {48--63},
                 url = {http://eprints.lincoln.ac.uk/26195/},
            abstract = {Assume a robot operating in a public space (e.g., a library, a museum) and serving visitors as a companion, a guide or an information stand. To do  that, the robot has to interact with humans, which presumes that it actively searches for humans in order to interact with them. This paper addresses the problem how to plan robot's actions in order to maximize the number of such interactions in the case human behavior is not known in advance. We formulate this problem as the exploration/exploitation problem and design several strategies for the robot. The main contribution of the paper than lies in evaluation and comparison of the designed strategies on two datasets. The evaluation shows interesting properties of the strategies, which are discussed.}
    }
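
    The paper evaluates several interaction-seeking strategies on real datasets. Purely to illustrate the underlying exploration/exploitation trade-off, the sketch below compares an epsilon-greedy rule with a UCB-style rule for choosing where to wait for people in each hourly bin of a synthetic environment; the strategies, locations and encounter probabilities here are generic stand-ins, not those evaluated in the paper.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical ground truth: probability of meeting someone at each of
    # 4 locations for each hour of the day (unknown to the robot).
    TRUE_P = rng.uniform(0.05, 0.6, size=(24, 4))

    def ucb_choice(counts, successes, t, c=1.0):
        """UCB1-style location choice for one hourly bin."""
        never_tried = counts == 0
        if never_tried.any():
            return int(np.argmax(never_tried))
        mean = successes / counts
        bonus = c * np.sqrt(np.log(t + 1) / counts)
        return int(np.argmax(mean + bonus))

    def run(strategy, days=60):
        counts = np.zeros((24, 4))
        successes = np.zeros((24, 4))
        total = 0
        for day in range(days):
            for hour in range(24):
                if strategy == "greedy-eps":
                    if rng.random() < 0.1 or counts[hour].sum() == 0:
                        loc = rng.integers(4)               # explore
                    else:
                        loc = int(np.argmax(successes[hour] / np.maximum(counts[hour], 1)))
                else:                                        # "ucb"
                    loc = ucb_choice(counts[hour], successes[hour], day)
                met = rng.random() < TRUE_P[hour, loc]
                counts[hour, loc] += 1
                successes[hour, loc] += met
                total += met
        return total

    for s in ("greedy-eps", "ucb"):
        print(s, "interactions:", run(s))
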
  • K. Kusumam, T. Krajnik, S. Pearson, G. Cielniak, and T. Duckett, “Can you pick a broccoli? 3D-vision based detection and localisation of broccoli heads in the field,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016.
    [BibTeX] [Abstract] [EPrints]

    This paper presents a 3D vision system for robotic harvesting of broccoli using low-cost RGB-D sensors. The presented method addresses the tasks of detecting mature broccoli heads in the field and providing their 3D locations relative to the vehicle. The paper evaluates different 3D features, machine learning and temporal filtering methods for detection of broccoli heads. Our experiments show that a combination of Viewpoint Feature Histograms, Support Vector Machine classifier and a temporal filter to track the detected heads results in a system that detects broccoli heads with 95.2% precision. We also show that the temporal filtering can be used to generate a 3D map of the broccoli head positions in the field.

    @inproceedings{lirolem24087,
           booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
               month = {October},
               title = {Can you pick a broccoli? 3D-vision based detection and localisation of broccoli heads in the field},
              author = {Keerthy Kusumam and Tomas Krajnik and Simon Pearson and Grzegorz Cielniak and Tom Duckett},
           publisher = {IEEE},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/24087/},
            abstract = {This paper presents a 3D vision system for robotic harvesting of broccoli using low-cost RGB-D sensors. The presented method addresses the tasks of detecting mature broccoli heads in the field and providing their 3D locations relative to the vehicle. The paper evaluates different 3D features, machine learning and temporal filtering methods for detection of broccoli heads. Our experiments show that a combination of Viewpoint Feature Histograms, Support Vector Machine classifier and a temporal filter to track the detected heads results in a system that detects broccoli heads with 95.2\% precision. We also show that the temporal filtering can be used to generate a 3D map of the broccoli head positions in the field.}
    }
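
    The detection pipeline described above combines Viewpoint Feature Histograms (VFH), an SVM classifier and temporal filtering of the detected 3D positions. The sketch below mimics that structure with synthetic 308-dimensional vectors standing in for real VFH descriptors computed from point-cloud clusters, and a naive nearest-neighbour confirmation filter standing in for the paper's tracker; it only illustrates the shape of such a pipeline.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)

    def fake_vfh(is_broccoli, n):
        """Stand-in for VFH descriptors; in reality these come from RGB-D clusters."""
        centre = 0.8 if is_broccoli else 0.2
        return rng.normal(centre, 0.3, size=(n, 308))

    X = np.vstack([fake_vfh(True, 200), fake_vfh(False, 200)])
    y = np.array([1] * 200 + [0] * 200)
    clf = SVC(kernel="rbf").fit(X, y)

    def temporal_filter(detections, min_hits=3, max_dist=0.1):
        """Keep only 3D detections confirmed by >= min_hits nearby detections."""
        confirmed = []
        for frame_i, p_i in detections:
            hits = sum(1 for frame_j, p_j in detections
                       if frame_j != frame_i and np.linalg.norm(p_i - p_j) < max_dist)
            if hits + 1 >= min_hits:
                confirmed.append((frame_i, p_i))
        return confirmed

    # One synthetic "head" seen in 4 consecutive frames plus a spurious detection.
    track = [(t, np.array([1.0, 2.0, 0.3]) + rng.normal(0, 0.01, 3)) for t in range(4)]
    noise = [(9, np.array([5.0, 5.0, 0.3]))]
    print(len(temporal_filter(track + noise)), "confirmed detections")
    print("classifier accuracy on fresh synthetic data:",
          clf.score(np.vstack([fake_vfh(True, 50), fake_vfh(False, 50)]),
                    [1] * 50 + [0] * 50))
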
  • F. Lier, M. Hanheide, L. Natale, S. Schulz, J. Weisz, S. Wachsmuth, and S. Wrede, “Towards automated system and experiment reproduction in robotics,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016.
    [BibTeX] [Abstract] [EPrints]

    Even though research on autonomous robots and human-robot interaction has made great progress in recent years, and reusable soft- and hardware components are available, many of the reported findings are hardly reproducible by fellow scientists. Usually, reproducibility is impeded because required information, such as the specification of software versions and their configuration, the required data sets, and the experiment protocols, is not mentioned or referenced in most publications. In order to address these issues, we recently introduced an integrated tool chain and its underlying development process to facilitate reproducibility in robotics. In this contribution we instantiate the complete tool chain in a unique user study in order to assess its applicability and usability. To this end, we chose three different robotic systems from independent institutions and modeled them in our tool chain, including three exemplary experiments. Subsequently, we asked twelve researchers to reproduce one of the formerly unknown systems and the associated experiment. We show that all twelve scientists were able to replicate a formerly unknown robotics experiment using our tool chain.

    @inproceedings{lirolem24852,
           booktitle = {2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
               month = {October},
               title = {Towards automated system and experiment reproduction in robotics},
              author = {Florian Lier and Marc Hanheide and Lorenzo Natale and Simon Schulz and Jonathan Weisz and Sven Wachsmuth and Sebastian Wrede},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/24852/},
            abstract = {Even though research on autonomous robots and
    human-robot interaction accomplished great progress in recent
    years, and reusable soft- and hardware components are
    available, many of the reported findings are only hardly
    reproducible by fellow scientists. Usually, reproducibility is
    impeded because required information, such as the specification
    of software versions and their configuration, required data sets,
    and experiment protocols are not mentioned or referenced
    in most publications. In order to address these issues, we
    recently introduced an integrated tool chain and its underlying
    development process to facilitate reproducibility in robotics.
    In this contribution we instantiate the complete tool chain in
    a unique user study in order to assess its applicability and
    usability. To this end, we chose three different robotic systems
    from independent institutions and modeled them in our tool
    chain, including three exemplary experiments. Subsequently,
    we asked twelve researchers to reproduce one of the formerly
    unknown systems and the associated experiment. We show that
    all twelve scientists were able to replicate a formerly unknown
    robotics experiment using our tool chain.}
    }
  • O. M. Mozos, V. Sandulescu, S. Andrews, D. Ellis, N. Bellotto, R. Dobrescu, and J. M. Ferrandez, “Stress detection using wearable physiological and sociometric sensors,” International Journal of Neural Systems, p. 1650041, 2016.
    [BibTeX] [Abstract] [EPrints]

    Stress remains a significant social problem for individuals in modern societies. This paper presents a machine learning approach for the automatic detection of stress of people in a social situation by combining two sensor systems that capture physiological and social responses. We compare the performance using different classifiers including support vector machine, AdaBoost, and k-nearest neighbour. Our experimental results show that by combining the measurements from both sensor systems, we could accurately discriminate between stressful and neutral situations during a controlled Trier social stress test (TSST). Moreover, this paper assesses the discriminative ability of each sensor modality individually and considers their suitability for real-time stress detection. Finally, we present a study of the most discriminative features for stress detection.

    @article{lirolem23128,
               month = {December},
               title = {Stress detection using wearable physiological and sociometric sensors},
              author = {Oscar Martinez Mozos and Virginia Sandulescu and Sally Andrews and David Ellis and Nicola Bellotto and Radu Dobrescu and Jose Manuel Ferrandez},
           publisher = {World Scientific Publishing},
                year = {2016},
               pages = {1650041},
             journal = {International Journal of Neural Systems},
                 url = {http://eprints.lincoln.ac.uk/23128/},
            abstract = {Stress remains a significant social problem for individuals in modern societies. This paper presents a machine learning approach for the automatic detection of stress of people in a social situation by combining two sensor systems that capture physiological and social responses. We compare the performance using different classifiers including support vector machine, AdaBoost, and k-nearest neighbour. Our experimental results show that by combining the measurements from both sensor systems, we could accurately discriminate between stressful and neutral situations during a controlled Trier social stress test (TSST). Moreover, this paper assesses the discriminative ability of each sensor modality individually and considers their suitability for real time stress detection. Finally, we present an study of the most discriminative features for stress detection.}
    }
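
    The classifier comparison mentioned in the abstract can be reproduced in outline with scikit-learn. The sketch below cross-validates an SVM, AdaBoost and k-NN on synthetic physiological-plus-sociometric feature windows; the features and labels are fabricated stand-ins, not the TSST data used in the paper.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)

    def windows(stressed, n):
        """Synthetic stand-ins: 4 physiological and 3 sociometric features per window."""
        phys = rng.normal(1.0 if stressed else 0.0, 1.0, size=(n, 4))
        social = rng.normal(0.5 if stressed else 0.0, 1.0, size=(n, 3))
        return np.hstack([phys, social])

    X = np.vstack([windows(True, 150), windows(False, 150)])
    y = np.array([1] * 150 + [0] * 150)

    for name, clf in [("SVM", SVC()),
                      ("AdaBoost", AdaBoostClassifier()),
                      ("kNN", KNeighborsClassifier(n_neighbors=5))]:
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name:8s} accuracy {scores.mean():.2f} +/- {scores.std():.2f}")
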
  • F. Riccio, R. Capobianco, M. Hanheide, and D. Nardi, “Stam: a framework for spatio-temporal affordance maps,” in International Workshop on Modelling and Simulation for Autonomous Systems, 2016, pp. 271-280.
    [BibTeX] [Abstract] [EPrints]

    Affordances have been introduced in the literature as action opportunities that objects offer, and used in robotics to semantically represent their interconnection. However, when considering an environment instead of an object, the problem becomes more complex due to the dynamism of its state. To tackle this issue, we introduce the concept of Spatio-Temporal Affordances (STA) and the Spatio-Temporal Affordance Map (STAM). Using this formalism, we encode action semantics related to the environment to improve the task execution capabilities of an autonomous robot. We experimentally validate our approach to support the execution of robot tasks by showing that affordances encode accurate semantics of the environment.

    @inproceedings{lirolem24851,
           booktitle = {International Workshop on Modelling and Simulation for Autonomous Systems},
               month = {June},
               title = {Stam: a framework for spatio-temporal affordance maps},
              author = {Francesco Riccio and Roberto Capobianco and Marc Hanheide and Daniele Nardi},
           publisher = {Springer},
                year = {2016},
               pages = {271--280},
                 url = {http://eprints.lincoln.ac.uk/24851/},
            abstract = {Affordances have been introduced in the literature as action opportunities that objects offer, and used in robotics to semantically represent their interconnection. However, when considering an environment instead of an object, the problem becomes more complex due to the dynamism of its state. To tackle this issue, we introduce the concept of Spatio-Temporal Affordances (STA) and the Spatio-Temporal Affordance Map (STAM). Using this formalism, we encode action semantics related to the environment to improve the task execution capabilities of an autonomous robot. We experimentally validate our approach to support the execution of robot tasks by showing that affordances encode accurate semantics of the environment.}
    }
  • C. Salatino, V. Gower, M. Ghrissi, A. Tapus, K. Wieczorowska-Tobis, A. Suwalska, P. Barattini, R. Rosso, G. Munaro, N. Bellotto, and H. van den Heuvel, “EnrichMe: a robotic solution for independence and active aging of elderly people with MCI,” in 15th International Conference on Computers Helping People with Special Needs (ICCHP 2016), 2016.
    [BibTeX] [Abstract] [EPrints]

    Mild cognitive impairment (MCI) is a state related to ageing, and sometimes evolves to dementia. As there is no pharmacological treatment for MCI, a non-pharmacological approach is very important. The use of Information and Communication Technologies (ICT) in care and assistance services for elderly people increases their chances of prolonging independence thanks to better cognitive efficiency. Robots are seen to have the potential to support the care and independence of elderly people. The project ENRICHME (funded by the EU H2020 Programme) focuses on developing and testing technologies for supporting elderly people with MCI in their living environment for a long time. This paper describes the results of the activities conducted during the first year of the ENRICHME project, in particular the definition of user needs and requirements and the resulting system architecture.

    @inproceedings{lirolem22704,
           booktitle = {15th International Conference on Computers Helping People with Special Needs (ICCHP 2016)},
               month = {July},
               title = {EnrichMe: a robotic solution for independence and active aging of elderly people with MCI},
              author = {Claudia Salatino and Valerio Gower and Meftah Ghrissi and Adriana Tapus and K Wieczorowska-Tobis and A Suwalska and Paolo Barattini and Roberto Rosso and Giulia Munaro and Nicola Bellotto and Herjan van den Heuvel},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/22704/},
            abstract = {Mild cognitive impairment (MCI) is a state related to ageing, and sometimes evolves to dementia. As there is no pharmacological treatment for MCI, a non-pharmacological approach is very important. The use of Information and Communication Technologies (ICT) in care and assistance services for elderly people increases their chances of prolonging independence thanks to better cognitive efficiency. Robots are seen to have the potential to support the care and independence of elderly people. The project ENRICHME (funded by the EU H2020 Programme) focuses on developing and testing technologies for supporting elderly people with MCI in their living environment for a long time. This paper describes the results of the activities conducted during the first year of the ENRICHME project, in particular the definition of user needs and requirements and the resulting system architecture.}
    }
  • J. Santos, T. Krajnik, J. P. Fentanes, and T. Duckett, “A 3D simulation environment with real dynamics: a tool for benchmarking mobile robot performance in long-term deployments,” in ICRA 2016 Workshop: AI for Long-term Autonomy, 2016.
    [BibTeX] [Abstract] [EPrints]

    This paper describes a method to compare and evaluate mobile robot algorithms for long-term deployment in changing environments. Typically, the long-term performance of state estimation algorithms for mobile robots is evaluated using pre-recorded sensory datasets. However, such datasets are not suitable for evaluating decision-making and control algorithms where the behaviour of the robot will be different in every trial. Simulation allows this issue to be overcome and, while it ensures repeatability of experiments, the development of 3D simulations for an extended period of time is a costly exercise. In our approach, long-term datasets comprising high-level tracks of dynamic entities such as people and furniture are recorded by ambient sensors placed in a real environment. The high-level tracks are then used to parameterise a 3D simulation containing its own geometric models of the dynamic entities and the background scene. This simulation, which is based on actual human activities, can then be used to benchmark and validate algorithms for long-term operation of mobile robots.

    @inproceedings{lirolem23220,
           booktitle = {ICRA 2016 Workshop: AI for Long-term Autonomy},
               month = {May},
               title = {A 3D simulation environment with real dynamics: a tool for benchmarking mobile robot performance in long-term deployments},
              author = {Joao Santos and Tomas Krajnik and Jaime Pulido Fentanes and Tom Duckett},
                year = {2016},
                 url = {http://eprints.lincoln.ac.uk/23220/},
            abstract = {This paper describes a method to compare and evaluate mobile robot algorithms for long-term deployment in changing   environments. Typically, the long-term performance of state estimation algorithms for mobile robots is evaluated using pre-recorded sensory datasets. However such datasets are not suitable for evaluating decision-making and control  algorithms where the behaviour of the robot will be different in every trial. Simulation allows to overcome this issue and while it ensures repeatability of experiments, the development of 3D simulations for an extended period of time is a costly exercise.
    In our approach long-term datasets comprising high-level tracks of dynamic entities such as people and furniture are recorded by ambient sensors placed in a real environment. The high-level tracks are then used to parameterise a 3D  simulation containing its own geometric models of the dynamic entities and the background scene. This simulation,  which is based on actual human activities, can then be used to benchmark and validate algorithms for long-term  operation of mobile robots.}
    }
  • J. M. Santos, T. Krajnik, J. P. Fentanes, and T. Duckett, “Lifelong information-driven exploration to complete and refine 4-D spatio-temporal maps,” IEEE Robotics and Automation Letters, vol. 1, iss. 2, pp. 684-691, 2016.
    [BibTeX] [Abstract] [EPrints]

    This paper presents an exploration method that allows mobile robots to build and maintain spatio-temporal models of changing environments. The assumption of a perpetually changing world adds a temporal dimension to the exploration problem, making spatio-temporal exploration a never-ending, life-long learning process. We address the problem by application of information-theoretic exploration methods to spatio-temporal models that represent the uncertainty of environment states as probabilistic functions of time. This makes it possible to predict the potential information gain to be obtained by observing a particular area at a given time and, consequently, to decide which locations to visit and the best times to go there. To validate the approach, a mobile robot was deployed continuously over 5 consecutive business days in a busy office environment. The results indicate that the robot's ability to spot environmental changes im

    @article{lirolem22698,
              volume = {1},
              number = {2},
               month = {July},
              author = {Joao Machado Santos and Tomas Krajnik and Jaime Pulido Fentanes and Tom Duckett},
               title = {Lifelong information-driven exploration to complete and refine 4-D spatio-temporal maps},
           publisher = {IEEE},
                year = {2016},
             journal = {IEEE Robotics and Automation Letters},
               pages = {684--691},
                 url = {http://eprints.lincoln.ac.uk/22698/},
            abstract = {This paper presents an exploration method that allows mobile robots to build and maintain spatio-temporal models of changing environments. The assumption of a perpetually changing world adds a temporal dimension to the exploration problem, making spatio-temporal exploration a never-ending, life-long learning process. We address the problem by application of information-theoretic exploration methods to spatio-temporal models that represent the uncertainty of environment states as probabilistic functions of time. This makes it possible to predict the potential information gain to be obtained by observing a particular area at a given time and, consequently, to decide which locations to visit and the best times to go there. To validate the approach, a mobile robot was deployed continuously over 5 consecutive business days in a busy office environment. The results indicate that the robot's ability to spot environmental changes im}
    }
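
    The key quantity in this exploration scheme is the information expected from observing a location at a given time, computed from the spatio-temporal model's predicted state probabilities. Below is a minimal sketch of that decision rule using binary entropy over a small hand-made table of predicted probabilities; in the paper these values would come from the learned 4D model, not from hard-coded numbers.

    import numpy as np

    def entropy(p):
        """Binary entropy: expected information from observing a cell whose
        predicted probability of being occupied is p."""
        p = np.clip(p, 1e-6, 1 - 1e-6)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    # Hypothetical predicted occupancy probabilities for 3 locations over the
    # next 6 hourly slots (illustrative values only).
    pred = np.array([
        [0.05, 0.10, 0.50, 0.55, 0.10, 0.05],   # location A
        [0.95, 0.90, 0.92, 0.50, 0.45, 0.93],   # location B
        [0.50, 0.49, 0.51, 0.05, 0.04, 0.06],   # location C
    ])

    gain = entropy(pred)                         # expected information per (place, time)
    best_place, best_time = np.unravel_index(np.argmax(gain), gain.shape)
    print("expected gain matrix:\n", np.round(gain, 2))
    print(f"go to location {'ABC'[best_place]} at slot {best_time}")
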
  • D. Skočaj, A. Vrečko, M. Mahnič, M. Janíček, G. M. Kruijff, M. Hanheide, N. Hawes, J. L. Wyatt, T. Keller, K. Zhou, M. Zillich, and M. Kristan, “An integrated system for interactive continuous learning of categorical knowledge,” Journal of Experimental & Theoretical Artificial Intelligence, vol. 28, iss. 5, pp. 823-848, 2016.
    [BibTeX] [Abstract] [EPrints]

    This article presents an integrated robot system capable of interactive learning in dialogue with a human. Such a system needs to have several competencies and must be able to process different types of representations. In this article, we describe a collection of mechanisms that enable integration of heterogeneous competencies in a principled way. Central to our design is the creation of beliefs from visual and linguistic information, and the use of these beliefs for planning system behaviour to satisfy internal drives. The system is able to detect gaps in its knowledge and to plan and execute actions that provide information needed to fill these gaps. We propose a hierarchy of mechanisms which are capable of engaging in different kinds of learning interactions, e.g. those initiated by a tutor or by the system itself. We present the theory these mechanisms are built upon and an instantiation of this theory in the form of an integrated robot system. We demonstrate the operation of the system in the case of learning conceptual models of objects and their visual properties.

    @article{lirolem22203,
              volume = {28},
              number = {5},
               month = {August},
              author = {Danijel Sko{\vc}aj and Alen Vre{\vc}ko and Marko Mahni{\vc} and Miroslav Jan{\'i}{\vc}ek and Geert-Jan M Kruijff and Marc Hanheide and Nick Hawes and Jeremy L Wyatt and Thomas Keller and Kai Zhou and Michael Zillich and Matej Kristan},
               title = {An integrated system for interactive continuous learning of categorical knowledge},
           publisher = {Taylor \& Francis: STM, Behavioural Science and Public Health Titles},
                year = {2016},
             journal = {Journal of Experimental \& Theoretical Artificial Intelligence},
               pages = {823--848},
                 url = {http://eprints.lincoln.ac.uk/22203/},
            abstract = {This article presents an integrated robot system capable of interactive learning in dialogue with a human. Such a system needs to have several competencies and must be able to process different types of representations. In this article, we describe a collection of mechanisms that enable integration of heterogeneous competencies in a principled way. Central to our design is the creation of beliefs from visual and linguistic information, and the use of these beliefs for planning system behaviour to satisfy internal drives. The system is able to detect gaps in its knowledge and to plan and execute actions that provide information needed to fill these gaps. We propose a hierarchy of mechanisms which are capable of engaging in different kinds of learning interactions, e.g. those initiated by a tutor or by the system itself. We present the theory these mechanisms are build upon and an instantiation of this theory in the form of an integrated robot system. We demonstrate the operation of the system in the case of learning conceptual models of objects and their visual properties.}
    }
  • C. Xiong, W. Chen, B. Sun, M. Liu, S. Yue, and W. Chen, “Design and implementation of an anthropomorphic hand for replicating human grasping functions,” IEEE Transactions on Robotics, vol. 32, iss. 3, pp. 652-671, 2016.
    [BibTeX] [Abstract] [EPrints]

    How to design an anthropomorphic hand with a few actuators to replicate the grasping functions of the human hand is still a challenging problem. This paper aims to develop a general theory for designing the anthropomorphic hand and endowing the designed hand with natural grasping functions. A grasping experimental paradigm was set up for analyzing the grasping mechanism of the human hand in daily living. The movement relationship among joints in a digit, among digits in the human hand, and the postural synergic characteristic of the fingers were studied during the grasping. The design principle of the anthropomorphic mechanical digit that can reproduce the digit grasping movement of the human hand was developed. The design theory of the kinematic transmission mechanism that can be embedded into the palm of the anthropomorphic hand to reproduce the postural synergic characteristic of the fingers by using a limited number of actuators is proposed. The design method of the anthropomorphic hand for replicating human grasping functions was formulated. Grasping experiments are given to verify the effectiveness of the proposed design method of the anthropomorphic hand. © 2016 IEEE.

    @article{lirolem23735,
              volume = {32},
              number = {3},
               month = {June},
              author = {Cai-Hua Xiong and Wen-Rui Chen and Bai-Yang Sun and Ming-Jin Liu and Shigang Yue and Wen-Bin Chen},
               title = {Design and implementation of an anthropomorphic hand for replicating human grasping functions},
           publisher = {Institute of Electrical and Electronics Engineers Inc.},
                year = {2016},
             journal = {IEEE Transactions on Robotics},
               pages = {652--671},
                 url = {http://eprints.lincoln.ac.uk/23735/},
            abstract = {How to design an anthropomorphic hand with a few actuators to replicate the grasping functions of the human hand is still a challenging problem. This paper aims to develop a general theory for designing the anthropomorphic hand and endowing the designed hand with natural grasping functions. A grasping experimental paradigm was set up for analyzing the grasping mechanism of the human hand in daily living. The movement relationship among joints in a digit, among digits in the human hand, and the postural synergic characteristic of the fingers were studied during the grasping. The design principle of the anthropomorphic mechanical digit that can reproduce the digit grasping movement of the human hand was developed. The design theory of the kinematic transmission mechanism that can be embedded into the palm of the anthropomorphic hand to reproduce the postural synergic characteristic of the fingers by using a limited number of actuators is proposed. The design method of the anthropomorphic hand for replicating human grasping functions was formulated. Grasping experiments are given to verify the effectiveness of the proposed design method of the anthropomorphic hand. {\copyright} 2016 IEEE.}
    }
  • X. Zheng, Z. Wang, F. Li, F. Zhao, S. Yue, C. Zhang, and Z. Wang, “A 14-bit 250 MS/s IF sampling pipelined ADC in 180 nm CMOS process,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 63, iss. 9, pp. 1381-1392, 2016.
    [BibTeX] [Abstract] [EPrints]

    This paper presents a 14-bit 250 MS/s ADC fabricated in a 180 nm CMOS process, which aims at optimizing its linearity, operating speed, and power efficiency. The implemented ADC employs an improved SHA with parasitic-optimized bootstrapped switches to achieve high sampling linearity over a wide input frequency range. It also explores a dedicated foreground calibration to correct the capacitor mismatches and the gain error of the residue amplifier, where a novel configuration scheme with little cost for the analog front-end is developed. Moreover, a partial non-overlapping clock scheme associated with a high-speed reference buffer and fast comparators is proposed to maximize the residue settling time. The implemented ADC is measured under different input frequencies with a sampling rate of 250 MS/s and it consumes 300 mW from a 1.8 V supply. For a 30 MHz input, the measured SFDR and SNDR of the ADC are 94.7 dB and 68.5 dB, which remain over 84.3 dB and 65.4 dB for input frequencies up to 400 MHz. The measured DNL and INL after calibration are optimized to 0.15 LSB and 1.00 LSB, respectively, while the Walden FOM at the Nyquist frequency is 0.57 pJ/step.

    @article{lirolem25371,
              volume = {63},
              number = {9},
               month = {September},
              author = {Xuqiang Zheng and Zhijun Wang and Fule Li and Feng Zhao and Shigang Yue and Chun Zhang and Zhihua Wang},
               title = {A 14-bit 250 MS/s IF sampling pipelined ADC in 180 nm CMOS process},
           publisher = {IEEE},
                year = {2016},
             journal = {IEEE Transactions on Circuits and Systems I: Regular Papers},
               pages = {1381--1392},
                 url = {http://eprints.lincoln.ac.uk/25371/},
            abstract = {This paper presents a 14-bit 250 MS/s ADC fabricated
    in a 180 nm CMOS process, which aims at optimizing its
    linearity, operating speed, and power efficiency. The implemented
    ADC employs an improved SHA with parasitic optimized bootstrapped
    switches to achieve high sampling linearity over a wide
    input frequency range. It also explores a dedicated foreground
    calibration to correct the capacitor mismatches and the gain
    error of residue amplifier, where a novel configuration scheme
    with little cost for analog front-end is developed. Moreover, a
    partial non-overlapping clock scheme associated with a highspeed
    reference buffer and fast comparators is proposed to
    maximize the residue settling time. The implemented ADC is
    measured under different input frequencies with a sampling rate
    of 250 MS/s and it consumes 300 mW from a 1.8 V supply. For 30
    MHz input, the measured SFDR and SNDR of the ADC is 94.7
    dB and 68.5 dB, which can remain over 84.3 dB and 65.4 dB for
    up to 400 MHz. The measured DNL and INL after calibration
    are optimized to 0.15 LSB and 1.00 LSB, respectively, while the
    Walden FOM at Nyquist frequency is 0.57 pJ/step.}
    }

2015

  • S. Albrecht, A. M. S. da Barreto, D. Braziunas, D. Buckeridge, and H. Cuayahuitl, “Reports of the AAAI 2014 Conference Workshops,” AI Magazine, vol. 36, iss. 1, pp. 87-98, 2015.
    [BibTeX] [Abstract] [EPrints]

    The AAAI-14 Workshop program was held Sunday and Monday, July 27-28, 2014, at the Québec City Convention Centre in Québec, Canada. The AAAI-14 workshop program included fifteen workshops covering a wide range of topics in artificial intelligence. The titles of the workshops were AI and Robotics; Artificial Intelligence Applied to Assistive Technologies and Smart Environments; Cognitive Computing for Augmented Human Intelligence; Computer Poker and Imperfect Information; Discovery Informatics; Incentives and Trust in Electronic Communities; Intelligent Cinematography and Editing; Machine Learning for Interactive Systems: Bridging the Gap between Perception, Action and Communication; Modern Artificial Intelligence for Health Analytics; Multiagent Interaction without Prior Coordination; Multidisciplinary Workshop on Advances in Preference Handling; Semantic Cities — Beyond Open Data to Models, Standards and Reasoning; Sequential Decision Making with Big Data; Statistical Relational AI; and The World Wide Web and Public Health Intelligence. This article presents short summaries of those events.

    @article{lirolem22215,
              volume = {36},
              number = {1},
               month = {January},
              author = {Stefano Albrecht and Andr{\'e} da Motta Salles Barreto and Darius Braziunas and David Buckeridge and Heriberto Cuayahuitl},
               title = {Reports of the AAAI 2014 Conference Workshops},
           publisher = {Association for the Advancemant of Artificial Intelligence},
                year = {2015},
             journal = {AI Magazine},
               pages = {87--98},
                 url = {http://eprints.lincoln.ac.uk/22215/},
            abstract = {The AAAI-14 Workshop program was held Sunday and Monday, July 27--28, 2014, at the Qu{\'e}bec City Convention Centre in Qu{\'e}bec, Canada. The AAAI-14 workshop program included fifteen workshops covering a wide range of topics in artificial intelligence. The titles of the workshops were AI and Robotics; Artificial Intelligence Applied to Assistive Technologies and Smart Environments; Cognitive Computing for Augmented Human Intelligence; Computer Poker and Imperfect Information; Discovery Informatics; Incentives and Trust in Electronic Communities; Intelligent Cinematography and Editing; Machine Learning for Interactive Systems: Bridging the Gap between Perception, Action and Communication; Modern Artificial Intelligence for Health Analytics; Multiagent Interaction without Prior Coordination; Multidisciplinary Workshop on Advances in Preference Handling; Semantic Cities {--} Beyond Open Data to Models, Standards and Reasoning; Sequential Decision Making with Big Data; Statistical Relational AI; and The World Wide Web and Public Health Intelligence. This article presents short summaries of those events.}
    }
  • P. Ardin, M. Mangan, A. Wystrach, and B. Webb, “How variation in head pitch could affect image matching algorithms for ant navigation,” Journal of Comparative Physiology A, vol. 201, iss. 6, pp. 585-597, 2015.
    [BibTeX] [Abstract] [EPrints]

    Desert ants are a model system for animal navigation, using visual memory to follow long routes across both sparse and cluttered environments. Most accounts of this behaviour assume retinotopic image matching, e.g. recovering heading direction by finding a minimum in the image difference function as the viewpoint rotates. But most models neglect the potential image distortion that could result from unstable head motion. We report that for ants running across a short section of natural substrate, the head pitch varies substantially: by over 20 degrees with no load, and 60 degrees when carrying a large food item. There is no evidence of head stabilisation. Using a realistic simulation of the ant's visual world, we demonstrate that this range of head pitch significantly degrades image matching. The effect of pitch variation can be ameliorated by a memory bank of images densely sampled along a route, so that an image sufficiently similar in pitch and location is available for comparison. However, with large pitch disturbance, inappropriate memories sampled at distant locations are often recalled and navigation along a route can be adversely affected. Ignoring images obtained at extreme pitches, or averaging images over several pitches, does not significantly improve performance.

    @article{lirolem23586,
              volume = {201},
              number = {6},
               month = {June},
              author = {Paul Ardin and Michael Mangan and Antoine Wystrach and Barbara Webb},
               title = {How variation in head pitch could affect image matching algorithms for ant navigation},
           publisher = {Springer Berlin Heidelberg},
                year = {2015},
             journal = {Journal of Comparative Physiology A},
               pages = {585--597},
                 url = {http://eprints.lincoln.ac.uk/23586/},
            abstract = {Desert ants are a model system for animal navigation, using visual memory to follow long routes across both sparse and cluttered environments. Most accounts of this behaviour assume retinotopic image matching, e.g. recovering heading direction by finding a minimum in the image difference function as the viewpoint rotates. But most models neglect the potential image distortion that could result from unstable head motion. We report that for ants running across a short section of natural substrate, the head pitch varies substantially: by over 20 degrees with no load, and 60 degrees when carrying a large food item. There is no evidence of head stabilisation. Using a realistic simulation of the ant's visual world, we demonstrate that this range of head pitch significantly degrades image matching. The effect of pitch variation can be ameliorated by a memory bank of images densely sampled along a route, so that an image sufficiently similar in pitch and location is available for comparison. However, with large pitch disturbance, inappropriate memories sampled at distant locations are often recalled and navigation along a route can be adversely affected. Ignoring images obtained at extreme pitches, or averaging images over several pitches, does not significantly improve performance.}
    }
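
    The image-matching account discussed in this paper recovers heading by rotating the current panoramic view and looking for the minimum of an image difference function against a stored snapshot. The sketch below implements that rotational matching on a random toy panorama and then crudely emulates a pitch disturbance as a vertical shift, illustrating why pitch variation can degrade the match; the image sizes and shifts are arbitrary stand-ins for the paper's simulated ant views.

    import numpy as np

    def rotational_idf(view, memory):
        """Image difference function over yaw: column-wise circular shifts of a
        panoramic view compared against a stored snapshot (mean squared diff)."""
        w = view.shape[1]
        return np.array([np.mean((np.roll(view, s, axis=1) - memory) ** 2) for s in range(w)])

    rng = np.random.default_rng(5)
    memory = rng.random((30, 90))                 # toy 30x90 panoramic snapshot
    view = np.roll(memory, -17, axis=1)           # same place, heading offset by 17 columns
    view_noisy = view + rng.normal(0, 0.05, view.shape)

    idf = rotational_idf(view_noisy, memory)
    print("recovered heading shift:", int(np.argmin(idf)), "columns (true 17)")

    # Pitch disturbance modelled crudely as a vertical shift of the view:
    pitched = np.roll(view_noisy, 4, axis=0)
    idf_pitched = rotational_idf(pitched, memory)
    print("with pitch disturbance :", int(np.argmin(idf_pitched)), "columns")
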
  • F. Arvin, T. Krajnik, A. E. Turgut, and S. Yue, “COS-Φ: artificial pheromone system for robotic swarms research,” IEEE/RSJ International Conference on Intelligent Robots and Systems 2015, 2015.
    [BibTeX] [Abstract] [EPrints]

    Pheromone-based communication is one of the most effective ways of communication widely observed in nature. It is particularly used by social insects such as bees, ants and termites, both for inter-agent and agent-swarm communications. Due to its effectiveness, artificial pheromones have been adopted in multi-robot and swarm robotic systems for more than a decade. Although pheromone-based communication has been implemented by different means, whether chemical (use of particular chemical compounds) or physical (RFID tags, light, sound), none of them were able to replicate all the aspects of pheromones as seen in nature. In this paper, we propose a novel artificial pheromone system that is reliable, accurate and uses only off-the-shelf components: an LCD screen and a low-cost USB camera. The system allows several pheromones and their interactions to be simulated, and the parameters of the pheromones (diffusion, evaporation, etc.) to be changed on the fly, allowing for controllable experiments. We tested the performance of the system using the Colias platform in single-robot and swarm scenarios. To allow the swarm robotics community to use the system for their research, we provide it as a freely available open-source package.

    @article{lirolem17957,
               month = {September},
               title = {COS-{\ensuremath{\Phi}}: artificial pheromone system for robotic swarms research},
              author = {Farshad Arvin and Tomas Krajnik and Ali Emre Turgut and Shigang Yue},
           publisher = {IEEE},
                year = {2015},
                note = {Conference:
    2015 IEEE/RSJ International Conference on  Intelligent Robots and Systems (IROS 2015), 28 September - 2 October 2015, Hamburg, Germany},
             journal = {IEEE/RSJ International Conference on Intelligent Robots and Systems 2015},
                 url = {http://eprints.lincoln.ac.uk/17957/},
            abstract = {Pheromone-based communication is one of the most effective ways of communication widely observed in nature. It is particularly used by social insects such as bees, ants and termites; both for inter-agent and agent-swarm communications. Due to its effectiveness; artificial pheromones have been adopted in multi-robot and swarm robotic systems for more than a decade. Although, pheromone-based communication was implemented by different means like chemical (use of particular chemical compounds) or physical (RFID tags, light, sound) ways, none of them were able to replicate all the aspects of pheromones as seen in nature. In this paper, we propose a novel artificial pheromone system that is reliable, accurate and it uses off-the-shelf components only -- LCD screen and low-cost USB camera. The system allows to simulate several pheromones and their interactions and to change parameters of the pheromones (diffusion, evaporation, etc.) on the fly allowing for controllable experiments. We tested the performance of the system using the Colias platform in single-robot and swarm scenarios. To allow the swarm robotics community to use the system for their research, we provide it as a freely available open-source package.}
    }
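
    In systems of this kind the artificial pheromone is a scalar field rendered on the LCD screen, with per-pheromone diffusion and evaporation parameters, which the robots sense locally. The sketch below models such a field as a NumPy grid with a simple 4-neighbour diffusion and exponential evaporation update; the grid size and rates are arbitrary and this is not the released COS-Φ code.

    import numpy as np

    class PheromoneField:
        """Toy pheromone grid: robots inject pheromone at their position, the
        field then diffuses to neighbours and evaporates each tick."""

        def __init__(self, shape=(120, 160), evaporation=0.02, diffusion=0.1):
            self.field = np.zeros(shape)
            self.evaporation = evaporation
            self.diffusion = diffusion

        def inject(self, row, col, amount=1.0):
            self.field[row, col] += amount

        def step(self):
            f = self.field
            # 4-neighbour diffusion using shifted copies of the field.
            neighbours = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                          np.roll(f, 1, 1) + np.roll(f, -1, 1))
            f += self.diffusion * (neighbours - 4 * f)
            f *= (1.0 - self.evaporation)

        def read(self, row, col):
            """What a robot's downward-facing sensor would read at this cell."""
            return float(self.field[row, col])

    field = PheromoneField()
    for t in range(50):                       # a robot laying a short trail
        field.inject(60, 30 + t)
        field.step()
    print("intensity on the trail :", round(field.read(60, 70), 3))
    print("intensity off the trail:", round(field.read(20, 70), 3))
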
  • F. Arvin, C. Xiong, and S. Yue, “Colias-Φ: an autonomous micro robot for artificial pheromone communication,” International Journal of Mechanical Engineering and Robotics Research, vol. 4, iss. 4, pp. 349-353, 2015.
    [BibTeX] [Abstract] [EPrints]

    Ant pheromone communication is an efficient mechanism observed in nature that has inspired various artificial intelligence and multi-robot systems. This paper presents the development of an autonomous micro robot to be used in swarm robotics research, especially in pheromone-based communication systems. The robot is an extended version of the Colias micro robot with the capability of decoding and following artificial pheromone trails. We utilize a low-cost experimental setup to implement pheromone-based scenarios using a flat LCD screen and a USB camera. The results of the experiments performed with a group of robots demonstrated the feasibility of using Colias-Φ in pheromone-based experiments.

    @article{lirolem19405,
              volume = {4},
              number = {4},
               month = {October},
              author = {Farshad Arvin and Caihua Xiong and Shigang Yue},
               title = {Colias-{\ensuremath{\Phi}}: an autonomous micro robot for artificial pheromone communication},
             journal = {International Journal of Mechanical Engineering and Robotics Research},
               pages = {349--353},
                year = {2015},
                 url = {http://eprints.lincoln.ac.uk/19405/},
            abstract = {Ants pheromone communication is an efficient mechanism which took inspiration from nature. It has been used in various artificial intelligence and multi robotics researches. This paper presents the development of an autonomous micro robot to be used in swarm robotic researches especially in pheromone based communication systems. The robot is an extended version of Colias micro robot with capability of decoding and following artificial pheromone trails. We utilize a low-cost experimental setup to implement pheromone-based scenarios using a flat LCD screen and a USB camera. The results of the performed experiments with group of robots demonstrated the feasibility of Colias-{\ensuremath{\Phi}} to be used in pheromone based experiments.}
    }
  • F. Arvin, R. Attar, A. E. Turgut, and S. Yue, “Power-law distribution of long-term experimental data in swarm robotics,” in International Conference on Swarm Intelligence, 2015, pp. 551-559.
    [BibTeX] [Abstract] [EPrints]

    Bio-inspired aggregation is one of the most fundamental behaviours that has been studied in swarm robotics for more than two decades. Biology has revealed that environmental characteristics are very important factors in the aggregation of social insects and other animals. In this paper, we study the effects of different environmental factors, such as the size and texture of aggregation cues, using real robots. In addition, we propose a mathematical model to predict the behaviour of the aggregation during an experiment.

    @inproceedings{lirolem17627,
           booktitle = {International Conference on Swarm Intelligence},
               month = {June},
               title = {Power-law distribution of long-term experimental data in swarm robotics},
              author = {Farshad Arvin and Rahman Attar and Ali Emre Turgut and Shigang Yue},
           publisher = {Springer},
                year = {2015},
               pages = {551--559},
                 url = {http://eprints.lincoln.ac.uk/17627/},
            abstract = {Bio-inspired aggregation is one of the most fundamental behaviours that has been 
    studied in swarm robotic for more than two decades. Biology revealed that the 
    environmental characteristics are very important factors in aggregation of social insects and 
    other animals. In this paper, we study the effects of different environmental factors such as 
    size and texture of aggregation cues using real robots. In addition, we propose a 
    mathematical model to predict the behaviour of the aggregation during an experiment.}
    }
  • W. Chen, C. Xiong, and S. Yue, “Mechanical implementation of kinematic synergy for continual grasping generation of anthropomorphic hand,” IEEE/ASME Transactions on Mechatronics, vol. 20, iss. 3, pp. 1249-1263, 2015.
    [BibTeX] [Abstract] [EPrints]

    The synergy-based motion generation of current anthropomorphic hands generally employs the static posture synergy, which is extracted from quantities of joint trajectories, to design the mechanism or control strategy. Under this framework, the temporal weight sequences of each synergy from the pre-grasp phase to the grasp phase are required for reproducing any grasping task. Moreover, the zero-offset posture has to be preset before starting any grasp. Thus, the whole grasp phase appears to be unlike natural human grasping. Up until now, no work in the literature addresses these issues toward simplifying continual grasping by only inputting the grasp pattern. In this paper, the kinematic synergies observed in angular velocity profiles are employed to design the motion generation mechanism. The kinematic synergy extracted from quantities of grasp tasks is implemented by the proposed eigen cam group in tendon space. A completely continual grasp from the fully extended posture only requires rotating the two eigen cam groups evenly through one cycle. The change of grasp pattern depends only on respecifying the transmission ratio pair for the two eigen cam groups. A hand prototype is developed based on the proposed design principle, and grasping experiments demonstrate the feasibility of the design method. Potential applications include prosthetic hands controlled by patterns classified from bio-signals.

    @article{lirolem17879,
              volume = {20},
              number = {3},
               month = {June},
              author = {Wenbin Chen and Caihua Xiong and Shigang Yue},
               title = {Mechanical implementation of kinematic synergy for continual grasping generation of anthropomorphic hand},
           publisher = {IEEE},
                year = {2015},
             journal = {IEEE/ASME Transactions on Mechatronics},
               pages = {1249--1263},
                 url = {http://eprints.lincoln.ac.uk/17879/},
            abstract = {The synergy-based motion generation of current anthropomorphic hands generally employ the static posture synergy, which is extracted from quantities of joint trajectory, to design the mechanism or control strategy. Under this framework, the temporal weight sequences of each synergy from pregrasp phase to grasp phase are required for reproducing any grasping task. Moreover, the zero-offset posture has to be preset before starting any grasp. Thus, the whole grasp phase appears to be unlike natural human grasp. Up until now, no work in the literature addresses these issues toward simplifying the continual grasp by only inputting the grasp pattern. In this paper, the kinematic synergies observed in angular velocity profile are employed to design the motion generation mechanism. The kinematic synergy extracted from quantities of grasp tasks is implemented by the proposed eigen cam group in tendon space. The completely continual grasp from the fully extending posture only require averagely rotating the two eigen cam groups one cycle. The change of grasp pattern only depends on respecifying transmission ratio pair for the two eigen cam groups. An illustrated hand prototype is developed based on the proposed design principle and the grasping experiments demonstrate the feasibility of the design method. The potential applications include the prosthetic hand that is controlled by the classified pattern from the bio-signal.}
    }
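
    Kinematic synergies of the kind exploited in this design are typically extracted as the principal components of recorded joint angular-velocity profiles. The sketch below runs that extraction (PCA via SVD) on synthetic 15-DOF velocity data generated from two hidden synergies; the data, joint count and dimensionality are invented, and the paper's actual contribution, the eigen cam transmission mechanism, is not modelled here.

    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic joint angular-velocity profiles: 40 grasps of a 15-DOF hand,
    # each generated from 2 underlying synergies plus noise, standing in for
    # the motion-capture data used in synergy studies.
    n_grasps, n_samples, n_joints = 40, 100, 15
    true_synergies = rng.normal(size=(2, n_joints))
    t = np.linspace(0, 1, n_samples)
    profiles = []
    for _ in range(n_grasps):
        w1, w2 = rng.uniform(0.5, 1.5, 2)
        vel = (w1 * np.sin(np.pi * t)[:, None] * true_synergies[0] +
               w2 * np.sin(2 * np.pi * t)[:, None] * true_synergies[1] +
               rng.normal(0, 0.05, (n_samples, n_joints)))
        profiles.append(vel)
    V = np.vstack(profiles)                        # (4000, 15) velocity samples

    # PCA via SVD of the mean-centred velocity matrix: the leading right
    # singular vectors are the kinematic synergies, and their share of the
    # variance shows how few components are needed to drive the hand.
    Vc = V - V.mean(axis=0)
    _, s, vt = np.linalg.svd(Vc, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    print("variance explained by first 3 synergies:", np.round(explained[:3], 3))
    print("first synergy (joint weights):", np.round(vt[0], 2))
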
  • C. Coppola, O. M. Mozos, and N. Bellotto, “Applying a 3D qualitative trajectory calculus to human action recognition using depth cameras,” in IEEE/RSJ IROS Workshop on Assistance and Service Robotics in a Human Environment, 2015.
    [BibTeX] [Abstract] [EPrints]

    The life span of ordinary people is increasing steadily and many developed countries are facing the big challenge of dealing with an ageing population at greater risk of impairments and cognitive disorders, which hinder their quality of life. Monitoring human activities of daily living (ADLs) is important in order to identify potential health problems and apply corrective strategies as soon as possible. Towards this long term goal, the research here presented is a first step to monitor ADLs using 3D sensors in an Ambient Assisted Living (AAL) environment. In particular, the work here presented adopts a new 3D Qualitative Trajectory Calculus (QTC3D) to represent human actions that belong to such activities, designing and implementing a set of computational tools (i.e. Hidden Markov Models) to learn and classify them from standard datasets. Preliminary results show the good performance of our system and its potential application to a large number of scenarios, including mobile robots for AAL.

    @inproceedings{lirolem18477,
           booktitle = {IEEE/RSJ IROS Workshop on Assistance and Service Robotics in a Human Environment},
               month = {October},
               title = {Applying a 3D qualitative trajectory calculus to human action recognition using depth cameras},
              author = {Claudio Coppola and Oscar Martinez Mozos and Nicola Bellotto},
           publisher = {IEEE},
                year = {2015},
                note = {2015 IEEE/RSJ International Conference on Intelligent Robots and Systems},
                 url = {http://eprints.lincoln.ac.uk/18477/},
            abstract = {The life span of ordinary people is increasing steadily and many developed countries are facing the big challenge of dealing with an ageing population at greater risk of impairments and cognitive disorders, which hinder their quality of life. Monitoring human activities of daily living (ADLs) is important in order to identify potential health problems and apply corrective strategies as soon as possible. Towards this long term goal, the research here presented is a first step to monitor ADLs using 3D sensors in an Ambient Assisted Living (AAL) environment. In particular, the work here presented adopts a new 3D Qualitative Trajectory Calculus (QTC3D) to represent human actions that belong to such activities, designing and implementing a set of computational tools (i.e. Hidden Markov Models) to learn and classify them from standard datasets. Preliminary results show the good performance of our system and its potential application to a large number of scenarios, including mobile robots for AAL.}
    }
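
    The entry above classifies symbolic QTC3D sequences with Hidden Markov Models. As a rough, illustrative sketch of the classification step only (not the authors' implementation), the snippet below scores a discrete symbol sequence against one pre-trained HMM per action using a plain forward algorithm in log space; the mapping from QTC3D states to integer symbol indices, and the model parameters themselves, are assumed to be given.

        import numpy as np

        def log_forward(obs, log_pi, log_A, log_B):
            """Log-likelihood of a symbol sequence under one discrete HMM.
            obs: integer symbol indices (e.g. quantised QTC3D states),
            log_pi: (S,) initial log-probs, log_A: (S, S) transition log-probs,
            log_B: (S, V) emission log-probs over the symbol vocabulary."""
            alpha = log_pi + log_B[:, obs[0]]
            for o in obs[1:]:
                # forward recursion: log-sum-exp over previous states
                alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
            return np.logaddexp.reduce(alpha)

        def classify(obs, models):
            """models: {action_name: (log_pi, log_A, log_B)}; returns the best action."""
            return max(models, key=lambda name: log_forward(obs, *models[name]))

    In practice one HMM per action class would be trained (e.g. with Baum-Welch) on sequences from the dataset, and classify simply returns the label whose model explains the observed sequence best.
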
  • H. Cuayahuitl, K. Komatani, and G. Skantze, “Introduction for speech and language for interactive robots,” Computer Speech & Language, vol. 34, iss. 1, pp. 83-86, 2015.
    [BibTeX] [Abstract] [EPrints]

    This special issue includes research articles which apply spoken language processing to robots that interact with human users through speech, possibly combined with other modalities. Robots that can listen to human speech, understand it, interact according to the conveyed meaning, and respond represent major research and technological challenges. Their common aim is to equip robots with natural interaction abilities. However, robotics and spoken language processing are areas that are typically studied within their respective communities with limited communication across disciplinary boundaries. The articles in this special issue represent examples that address the need for an increased multidisciplinary exchange of ideas.

    @article{lirolem22214,
              volume = {34},
              number = {1},
               month = {November},
              author = {Heriberto Cuayahuitl and Kazunori Komatani and Gabriel Skantze},
               title = {Introduction for speech and language for interactive robots},
           publisher = {Elsevier for International Speech Communication Association (ISCA)},
                year = {2015},
             journal = {Computer Speech \& Language},
               pages = {83--86},
            keywords = {ARRAY(0x7fdc78166b90)},
                 url = {http://eprints.lincoln.ac.uk/22214/},
            abstract = {This special issue includes research articles which apply spoken language processing to robots that interact with human users through speech, possibly combined with other modalities. Robots that can listen to human speech, understand it, interact according to the conveyed meaning, and respond represent major research and technological challenges. Their common aim is to equip robots with natural interaction abilities. However, robotics and spoken language processing are areas that are typically studied within their respective communities with limited communication across disciplinary boundaries. The articles in this special issue represent examples that address the need for an increased multidisciplinary exchange of ideas.}
    }
  • H. Cuayahuitl, S. Keizer, and O. Lemon, “Strategic dialogue management via deep reinforcement learning,” in NIPS Workshop on Deep Reinforcement Learning, 2015.
    [BibTeX] [Abstract] [EPrints]

    Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan—where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (‘bots’), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.

    @inproceedings{lirolem25994,
           booktitle = {NIPS Workshop on Deep Reinforcement Learning},
              volume = {abs/16},
               title = {Strategic dialogue management via deep reinforcement learning},
              author = {Heriberto Cuayahuitl and Simon Keizer and Oliver Lemon},
           publisher = {arXiv},
                year = {2015},
             journal = {CoRR},
            keywords = {ARRAY(0x7fdc7816bac8)},
                 url = {http://eprints.lincoln.ac.uk/25994/},
            abstract = {Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53\% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27\%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.}
    }
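
    As a hedged illustration of the kind of learner described above (not the paper's actual architecture or hyperparameters), the sketch below shows a small Q-network over a high-dimensional state vector with epsilon-greedy action selection, written in PyTorch; QNetwork, select_action and all dimensions are illustrative placeholders.

        import random
        import torch
        import torch.nn as nn

        class QNetwork(nn.Module):
            """Maps a high-dimensional game/dialogue state vector to action values."""
            def __init__(self, state_dim, n_actions, hidden=256):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(state_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, n_actions))

            def forward(self, state):
                return self.net(state)

        def select_action(qnet, state, epsilon, n_actions):
            """Epsilon-greedy choice over Q-values (exploration vs. exploitation)."""
            if random.random() < epsilon:
                return random.randrange(n_actions)
            with torch.no_grad():
                return int(qnet(state.unsqueeze(0)).argmax(dim=1).item())

    Training would then regress the network towards standard temporal-difference targets computed from the game rewards, which is the role the deep value-function approximation plays in the abstract's comparison against supervised and rule-based baselines.
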
  • C. Dondrup, N. Bellotto, F. Jovan, and M. Hanheide, “Real-time multisensor people tracking for human-robot spatial interaction,” in Workshop on Machine Learning for Social Robotics at ICRA 2015, 2015.
    [BibTeX] [Abstract] [EPrints]

    All currently used mobile robot platforms are able to navigate safely through their environment, avoiding static and dynamic obstacles. However, in human populated environments mere obstacle avoidance is not sufficient to make humans feel comfortable and safe around robots. To this end, a large community is currently producing human-aware navigation approaches to create a more socially acceptable robot behaviour. A major building block for all Human-Robot Spatial Interaction is the ability of detecting and tracking humans in the vicinity of the robot. We present a fully integrated people perception framework, designed to run in real-time on a mobile robot. This framework employs detectors based on laser and RGB-D data and a tracking approach able to fuse multiple detectors using different versions of data association and Kalman filtering. The resulting trajectories are transformed into Qualitative Spatial Relations based on a Qualitative Trajectory Calculus, to learn and classify different encounters using a Hidden Markov Model based representation. We present this perception pipeline, which is fully implemented into the Robot Operating System (ROS), in a small proof of concept experiment. All components are readily available for download, and free to use under the MIT license, to researchers in all fields, especially focussing on social interaction learning by providing different kinds of output, i.e. Qualitative Relations and trajectories.

    @inproceedings{lirolem17545,
           booktitle = {Workshop on Machine Learning for Social Robotics at ICRA 2015},
               month = {May},
               title = {Real-time multisensor people tracking for human-robot spatial interaction},
              author = {Christian Dondrup and Nicola Bellotto and Ferdian Jovan and Marc Hanheide},
           publisher = {ICRA / IEEE},
                year = {2015},
            keywords = {ARRAY(0x7fdc7816b828)},
                 url = {http://eprints.lincoln.ac.uk/17545/},
            abstract = {All currently used mobile robot platforms are able to navigate safely through their environment, avoiding static and dynamic obstacles. However, in human populated environments mere obstacle avoidance is not sufficient to make humans feel comfortable and safe around robots. To this end, a large community is currently producing human-aware navigation approaches to create a more socially acceptable robot behaviour. A major building block for all Human-Robot Spatial Interaction is the ability of detecting and tracking humans in the vicinity of the robot. We present a fully integrated people perception framework, designed to run in real-time on a mobile robot. This framework employs detectors based on laser and RGB-D data and a tracking approach able to fuse multiple detectors using different versions of data association and Kalman filtering. The resulting trajectories are transformed into Qualitative Spatial Relations based on a Qualitative Trajectory Calculus, to learn and classify different encounters using a Hidden Markov Model based representation. We present this perception pipeline, which is fully implemented into the Robot Operating System (ROS), in a small proof of concept experiment. All components are readily available for download, and free to use under the MIT license, to researchers in all fields, especially focussing on social interaction learning by providing different kinds of output, i.e. Qualitative Relations and trajectories.}
    }
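
    The tracker described above fuses laser and RGB-D detections with data association and Kalman filtering. A minimal sketch of one such component, assuming a 2-D constant-velocity motion model and position-only detections (the actual ROS framework is more elaborate), could look as follows:

        import numpy as np

        class ConstantVelocityKF:
            """2-D constant-velocity Kalman filter; state is [x, y, vx, vy]."""
            def __init__(self, q=0.5, r=0.1):
                self.x = np.zeros(4)                       # state estimate
                self.P = np.eye(4)                         # state covariance
                self.H = np.array([[1., 0., 0., 0.],
                                   [0., 1., 0., 0.]])      # observe position only
                self.Q = q * np.eye(4)                     # process noise (tuning parameter)
                self.R = r * np.eye(2)                     # measurement noise (per detector)

            def predict(self, dt):
                F = np.eye(4)
                F[0, 2] = F[1, 3] = dt                     # position += velocity * dt
                self.x = F @ self.x
                self.P = F @ self.P @ F.T + self.Q
                return self.x[:2]

            def update(self, z):
                """Fuse one associated position detection z = [x, y]."""
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ (z - self.H @ self.x)
                self.P = (np.eye(4) - K @ self.H) @ self.P

    Each detector (e.g. a laser-based leg detector or an RGB-D upper-body detector) would call update with its own measurement noise after data association assigns its detections to tracks.
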
  • C. Dondrup, N. Bellotto, M. Hanheide, K. Eder, and U. Leonards, “A computational model of human-robot spatial interactions based on a qualitative trajectory calculus,” Robotics, vol. 4, iss. 1, pp. 63-102, 2015.
    [BibTeX] [Abstract] [EPrints]

    In this paper we propose a probabilistic sequential model of Human-Robot Spatial Interaction (HRSI) using a well-established Qualitative Trajectory Calculus (QTC) to encode HRSI between a human and a mobile robot in a meaningful, tractable, and systematic manner. Our key contribution is to utilise QTC as a state descriptor and model HRSI as a probabilistic sequence of such states. Apart from the sole direction of movements of human and robot modelled by QTC, attributes of HRSI like proxemics and velocity profiles play vital roles for the modelling and generation of HRSI behaviour. In this paper, we particularly present how the concept of proxemics can be embedded in QTC to facilitate richer models. To facilitate reasoning on HRSI with qualitative representations, we show how we can combine the representational power of QTC with the concept of proxemics in a concise framework, enriching our probabilistic representation by implicitly modelling distances. We show the appropriateness of our sequential model of QTC by encoding different HRSI behaviours observed in two spatial interaction experiments. We classify these encounters, creating a comparative measurement, showing the representational capabilities of the model.

    @article{lirolem16987,
              volume = {4},
              number = {1},
               month = {March},
              author = {Christian Dondrup and Nicola Bellotto and Marc Hanheide and Kerstin Eder and Ute Leonards},
                note = {This article belongs to the Special Issue Representations and Reasoning for Robotics},
               title = {A computational model of human-robot spatial interactions based on a qualitative trajectory calculus},
           publisher = {MDPI},
                year = {2015},
             journal = {Robotics},
               pages = {63--102},
            keywords = {ARRAY(0x7fdc7816ba08)},
                 url = {http://eprints.lincoln.ac.uk/16987/},
            abstract = {In this paper we propose a probabilistic sequential model of Human-Robot Spatial Interaction (HRSI) using a well-established Qualitative Trajectory Calculus (QTC) to encode HRSI between a human and a mobile robot in a meaningful, tractable, and systematic manner. Our key contribution is to utilise QTC as a state descriptor and model HRSI as a probabilistic sequence of such states. Apart from the sole direction of movements of human and robot modelled by QTC, attributes of HRSI like proxemics and velocity profiles play vital roles for the modelling and generation of HRSI behaviour. In this paper, we particularly present how the concept of proxemics can be embedded in QTC to facilitate richer models. To facilitate reasoning on HRSI with qualitative representations, we show how we can combine the representational power of QTC with the concept of proxemics in a concise framework, enriching our probabilistic representation by implicitly modelling distances. We show the appropriateness of our sequential model of QTC by encoding different HRSI behaviours observed in two spatial interaction experiments. We classify these encounters, creating a comparative measurement, showing the representational capabilities of the model.}
    }
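
    To make the QTC state descriptor more concrete, the toy sketch below derives the two distance-related symbols of a basic QTC-like state ('-' approaching, '+' receding, '0' stable) from consecutive human and robot positions; it is an illustration of the idea rather than the calculus used in the paper, and the threshold eps is an arbitrary choice.

        import numpy as np

        def qtc_distance_symbol(p_prev, p_curr, other, eps=1e-3):
            """'-' if the agent moved towards the other, '+' if away, '0' if stable."""
            d_prev = np.linalg.norm(np.asarray(p_prev) - np.asarray(other))
            d_curr = np.linalg.norm(np.asarray(p_curr) - np.asarray(other))
            if d_curr < d_prev - eps:
                return '-'
            if d_curr > d_prev + eps:
                return '+'
            return '0'

        def qtc_b_state(human_prev, human_curr, robot_prev, robot_curr):
            """Two-symbol state: human w.r.t. robot, then robot w.r.t. human."""
            h = qtc_distance_symbol(human_prev, human_curr, robot_prev)
            r = qtc_distance_symbol(robot_prev, robot_curr, human_prev)
            return h + r   # e.g. both agents closing in on each other gives '--'

    A sequence of such states over an encounter, optionally augmented with a discretised proxemics distance as the paper proposes, is what the probabilistic sequential model operates on.
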
  • J. P. Fentanes, B. Lacerda, T. Krajnik, N. Hawes, and M. Hanheide, “Now or later? Predicting and maximising success of navigation actions from long-term experience,” in 2015 IEEE International Conference on Robotics and Automation (ICRA 2015), 2015.
    [BibTeX] [Abstract] [EPrints]

    In planning for deliberation or navigation in real-world robotic systems, one of the big challenges is to cope with change. It lies in the nature of planning that it has to make assumptions about the future state of the world, and the robot's chances of successively accomplishing actions in this future. Hence, a robot's plan can only be as good as its predictions about the world. In this paper, we present a novel approach to specifically represent changes that stem from periodic events in the environment (e.g. a door being opened or closed), which impact on the success probability of planned actions. We show that our approach to model the probability of action success as a set of superimposed periodic processes allows the robot to predict action outcomes in a long-term data obtained in two real-life offices better than a static model. We furthermore discuss and showcase how this knowledge gathered can be successfully employed in a probabilistic planning framework to devise better navigation plans. The key contributions of this paper are (i) the formation of the spectral model of action outcomes from non-uniform sampling, the (ii) analysis of its predictive power using two long-term datasets, and (iii) the application of the predicted outcomes in an MDP-based planning framework.

    @inproceedings{lirolem17745,
           booktitle = {2015 IEEE International Conference on Robotics and Automation (ICRA 2015)},
               month = {May},
               title = {Now or later? Predicting and maximising success of navigation actions from long-term experience},
              author = {Jaime Pulido Fentanes and Bruno Lacerda and Tomas Krajnik and Nick Hawes and Marc Hanheide},
           publisher = {IEEE/RAS},
                year = {2015},
            keywords = {ARRAY(0x7fdc7816b8e8)},
                 url = {http://eprints.lincoln.ac.uk/17745/},
            abstract = {In planning for deliberation or navigation in real-world robotic systems, one of the big challenges is to cope with change. It lies in the nature of planning that it has to make assumptions about the future state of the world, and the robot's chances of successively accomplishing actions in this future.
    Hence, a robot's plan can only be as good as its predictions about the world. In this paper, we present a novel approach to specifically represent changes that stem from periodic events in the environment (e.g. a door being opened or closed), which impact on the success probability of planned actions. We show that our approach to model the probability of action success as a set of superimposed periodic processes allows the robot to predict action outcomes in a long-term data obtained in two real-life offices better than a static model. We furthermore discuss and showcase how this knowledge gathered can be successfully employed in a probabilistic planning framework to devise better navigation plans. The key contributions of this paper are (i) the formation of the spectral model of action outcomes from non-uniform sampling, the (ii) analysis of its predictive power using two long-term datasets, and (iii) the application of the predicted outcomes in an MDP-based planning framework.}
    }
  • Q. Fu and S. Yue, “Modelling LGMD2 visual neuron system,” in 2015 IEEE International Workshop on Machine Learning for Signal Processing, 2015.
    [BibTeX] [Abstract] [EPrints]

    Two Lobula Giant Movement Detectors (LGMDs) have been identified in the lobula region of the locust visual system: LGMD1 and LGMD2. LGMD1 had been successfully used in robot navigation to avoid impending collision. LGMD2 also responds to looming stimuli in depth, and shares most the same properties with LGMD1; however, LGMD2 has its specific collision selective responds when dealing with different visual stimulus. Therefore, in this paper, we propose a novel way to model LGMD2, in order to emulate its predicted bio-functions, moreover, to solve some defects of previous LGMD1 computational models. The mechanism of ON and OFF cells, as well as bioinspired nonlinear functions, are introduced in our model, to achieve LGMD2's collision selectivity. Our model has been tested by a miniature mobile robot in real time. The results suggested this model has an ideal performance in both software and hardware for collision recognition.

    @inproceedings{lirolem24940,
           booktitle = {2015 IEEE International Workshop on Machine Learning for Signal Processing},
               month = {September},
               title = {Modelling LGMD2 visual neuron system},
              author = {Qinbing Fu and Shigang Yue},
                year = {2015},
            keywords = {ARRAY(0x7fdc78166d40)},
                 url = {http://eprints.lincoln.ac.uk/24940/},
            abstract = {Two Lobula Giant Movement Detectors (LGMDs) have been identified in the lobula region of the locust visual system: LGMD1 and LGMD2. LGMD1 had been successfully used in robot navigation to avoid impending collision. LGMD2 also responds to looming stimuli in depth, and shares most the same properties with LGMD1; however, LGMD2 has its specific collision selective responds when dealing with different visual stimulus. Therefore, in this paper, we propose a novel way to model LGMD2, in order to emulate its predicted bio-functions, moreover, to solve some defects of previous LGMD1 computational models. The mechanism of ON and OFF cells, as well as bioinspired nonlinear functions, are introduced in our model, to achieve LGMD2's collision selectivity. Our model has been tested by a miniature mobile robot in real time. The results suggested this model has an ideal performance in both software and hardware for collision recognition.}
    }
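
    The abstract highlights ON and OFF cells as the key addition for LGMD2-like selectivity. A minimal, purely illustrative sketch of that first stage (half-wave rectifying the inter-frame luminance change into separate brightening and darkening channels) is shown below; the weighting and later layers of the published model are not reproduced here.

        import numpy as np

        def on_off_channels(prev_frame, curr_frame):
            """Split the luminance change between consecutive grey-scale frames
            into ON (brightening) and OFF (darkening) excitation maps."""
            diff = curr_frame.astype(float) - prev_frame.astype(float)
            on_channel = np.maximum(diff, 0.0)     # responds to luminance increase
            off_channel = np.maximum(-diff, 0.0)   # responds to luminance decrease
            return on_channel, off_channel

    Weighting the OFF channel more strongly before spatial summation is one simple way to bias such a unit towards dark looming objects, in line with the selectivity attributed to LGMD2.
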
  • P. Gallina, N. Bellotto, and M. D. Luca, “Progressive co-adaptation in human-machine interaction,” in 12th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2015), 2015.
    [BibTeX] [Abstract] [EPrints]

    In this paper we discuss the concept of co-adaptation between a human operator and a machine interface and we summarize its application with emphasis on two different domains, teleoperation and assistive technology. The analysis of the literature reveals that only in few cases the possibility of a temporal evolution of the co-adaptation parameters has been considered. In particular, it has been overlooked the role of time-related indexes that capture changes in motor and cognitive abilities of the human operator. We argue that for a more effective long-term co-adaptation process, the interface should be able to predict and adjust its parameters according to the evolution of human skills and performance. We thus propose a novel approach termed progressive co-adaptation, whereby human performance is continuously monitored and the system makes inferences about changes in the users’ cognitive and motor skills. We illustrate the features of progressive co-adaptation in two possible applications, robotic telemanipulation and active vision for the visually impaired.

    @inproceedings{lirolem17501,
           booktitle = {12th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2015)},
               month = {July},
               title = {Progressive co-adaptation in human-machine interaction},
              author = {Paolo Gallina and Nicola Bellotto and Massimiliano Di Luca},
                year = {2015},
            keywords = {ARRAY(0x7fdc78166e00)},
                 url = {http://eprints.lincoln.ac.uk/17501/},
            abstract = {In this paper we discuss the concept of co-adaptation between a human operator and a machine interface and we summarize its application with emphasis on two different domains, teleoperation and assistive technology. The analysis of the literature reveals that only in few cases the possibility of a temporal evolution of the co-adaptation parameters has been considered. In particular, it has been overlooked the role of time-related indexes that capture changes in motor and cognitive abilities of the human operator. We argue that for a more effective long-term co-adaptation process, the interface should be able to predict and adjust its parameters according to the evolution of human skills and performance. We thus propose a novel approach termed progressive co-adaptation, whereby human performance is continuously monitored and the system makes inferences about changes in the users' cognitive and motor skills. We illustrate the features of progressive co-adaptation in two possible applications, robotic telemanipulation and active vision for the visually impaired.}
    }
  • Y. Gao, J. Peng, S. Yue, and Y. Zhao, “On the null space property of lq-minimization for 0 < q ≤ 1 in compressed sensing,” Journal of Function Spaces, vol. 2015, p. 579853, 2015.
    [BibTeX] [Abstract] [EPrints]

    The paper discusses the relationship between the null space property (NSP) and the lq-minimization in compressed sensing. Several versions of the null space property, that is, the lq stable NSP, the lq robust NSP, and the lq,p robust NSP for 0 < p ≤ q < 1 based on the standard lq NSP, are proposed, and their equivalent forms are derived. Consequently, reconstruction results for the lq-minimization can be derived easily under the NSP condition and its equivalent form. Finally, the lq NSP is extended to the lq-synthesis modeling and the mixed l2/lq-minimization, which deals with the dictionary-based sparse signals and the block sparse signals, respectively. © 2015 Yi Gao et al

    @article{lirolem17374,
              volume = {2015},
              author = {Yi Gao and Jigen Peng and Shigang Yue and Yuan Zhao},
                note = {This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
    Journal Title History
    Journal of Function Spaces 2014?Current
    Journal of Function Spaces and Applications 2003?2013 (Title Changed)  (ISSN 2090-8997, eISSN 0972-6802)},
               title = {On the null space property of lq -minimization for 0{\ensuremath{<}}q{$\leq$}1 in compressed sensing},
           publisher = {Hindawi Publishing Corporation},
             journal = {Journal of Function Spaces},
               pages = {579853},
                year = {2015},
            keywords = {ARRAY(0x7fdc7816baf8)},
                 url = {http://eprints.lincoln.ac.uk/17374/},
            abstract = {The paper discusses the relationship between the null space property (NSP) and the lq-minimization in compressed sensing. Several versions of the null space property, that is, the lq stable NSP, the lq robust NSP, and the lq,p robust NSP for 0{\ensuremath{<}}p{$\leq$}q{\ensuremath{<}}1 based on the standard lq NSP, are proposed, and their equivalent forms are derived. Consequently, reconstruction results for the lq-minimization can be derived easily under the NSP condition and its equivalent form. Finally, the lq NSP is extended to the lq-synthesis modeling and the mixed l2/lq-minimization, which deals with the dictionary-based sparse signals and the block sparse signals, respectively. {\copyright} 2015 Yi Gao et al}
    }
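
    For orientation, the standard lq null space property that the paper's stable and robust variants build on can be stated as follows (a textbook-style formulation, not quoted from the paper):

        For $0 < q \le 1$, a matrix $A$ satisfies the $\ell_q$ null space property of order $s$ if
        \[
          \|v_S\|_q^q \;<\; \|v_{S^c}\|_q^q
          \qquad \text{for all } v \in \ker(A)\setminus\{0\} \text{ and all index sets } |S| \le s,
        \]
        in which case every $s$-sparse vector $x$ is the unique minimiser of
        $\min_z \|z\|_q^q$ subject to $Az = Ax$.
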
  • Y. Gao, W. Wang, and S. Yue, “On the rate of convergence by generalized Baskakov operators,” Advances in Mathematical Physics, vol. 2015, p. 564854, 2015.
    [BibTeX] [Abstract] [EPrints]

    We firstly construct generalized Baskakov operators Vn,α,q(f; x) and their truncated sum Bn,α,q(f; γn, x). Secondly, we study the pointwise convergence and the uniform convergence of the operators Vn,α,q(f; x), respectively, and estimate that the rate of convergence by the operators Vn,α,q(f; x) is 1/n^(q/2). Finally, we study the convergence by the truncated operators Bn,α,q(f; γn, x) and state that the finite truncated sum Bn,α,q(f; γn, x) can replace the operators Vn,α,q(f; x) in the computational point of view provided that lim n→∞ n·γn = ∞. © 2015 Yi Gao et al.

    @article{lirolem17367,
              volume = {2015},
               month = {May},
              author = {Yi Gao and W. Wang and Shigang Yue},
                note = {This is an open access article distributed under the Creative Commons Attribution License, which
    permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.},
               title = {On the rate of convergence by generalized Baskakov operators},
           publisher = {Hindawi Publishing Corporation},
                year = {2015},
             journal = {Advances in Mathematical Physics},
               pages = {564854},
            keywords = {ARRAY(0x7fdc7816b978)},
                 url = {http://eprints.lincoln.ac.uk/17367/},
            abstract = {We firstly construct generalized Baskakov operators V n, {\ensuremath{\alpha}}, q (f; x) and their truncated sum B n, {\ensuremath{\alpha}}, q (f; {\ensuremath{\gamma}} n, x). Secondly, we study the pointwise convergence and the uniform convergence of the operators V n, {\ensuremath{\alpha}}, q (f; x), respectively, and estimate that the rate of convergence by the operators V n, {\ensuremath{\alpha}}, q (f; x) is 1 / n q / 2. Finally, we study the convergence by the truncated operators B n, {\ensuremath{\alpha}}, q (f; {\ensuremath{\gamma}} n, x) and state that the finite truncated sum B n, {\ensuremath{\alpha}}, q (f; {\ensuremath{\gamma}} n, x) can replace the operators V n, {\ensuremath{\alpha}}, q (f; x) in the computational point of view provided that lim n {$\rightarrow$} {\ensuremath{\infty}} n {\ensuremath{\gamma}} n = {\ensuremath{\infty}}. {\copyright} 2015 Yi Gao et al.}
    }
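
    As background for the generalized operators Vn,α,q studied above, the classical Baskakov operator takes the following standard form (textbook definition, not the paper's generalization):

        \[
          V_n(f;x) \;=\; \sum_{k=0}^{\infty} f\!\left(\frac{k}{n}\right)
          \binom{n+k-1}{k}\, \frac{x^k}{(1+x)^{n+k}},
          \qquad x \in [0,\infty),\; n \in \mathbb{N}.
        \]
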
  • P. Graham and M. Mangan, “Insect navigation: do ants live in the now?,” Journal of Experimental Biology, vol. 218, iss. 6, pp. 819-823, 2015.
    [BibTeX] [Abstract] [EPrints]

    Visual navigation is a critical behaviour for many animals, and it has been particularly well studied in ants. Decades of ant navigation research have uncovered many ways in which efficient navigation can be implemented in small brains. For example, ants show us how visual information can drive navigation via procedural rather than map-like instructions. Two recent behavioural observations highlight interesting adaptive ways in which ants implement visual guidance. Firstly, it has been shown that the systematic nest searches of ants can be biased by recent experience of familiar scenes. Secondly, ants have been observed to show temporary periods of confusion when asked to repeat a route segment, even if that route segment is very familiar. Taken together, these results indicate that the navigational decisions of ants take into account their recent experiences as well as the currently perceived environment.

    @article{lirolem23585,
              volume = {218},
              number = {6},
               month = {March},
              author = {Paul Graham and Michael Mangan},
               title = {Insect navigation: do ants live in the now?},
           publisher = {Company of Biologists},
                year = {2015},
             journal = {Journal of Experimental Biology},
               pages = {819--823},
            keywords = {ARRAY(0x7fdc7816ba38)},
                 url = {http://eprints.lincoln.ac.uk/23585/},
            abstract = {Visual navigation is a critical behaviour for many animals, and it has been
    particularly well studied in ants. Decades of ant navigation research have
    uncovered many ways in which efficient navigation can be implemented
    in small brains. For example, ants show us how visual information can
    drive navigation via procedural rather than map-like instructions. Two
    recent behavioural observations highlight interesting adaptive ways in
    which ants implement visual guidance. Firstly, it has been shown that the
    systematic nest searches of ants can be biased by recent experience of
    familiar scenes. Secondly, ants have been observed to show temporary
    periods of confusion when asked to repeat a route segment, even if that
    route segment is very familiar. Taken together, these results indicate that
    the navigational decisions of ants take into account their recent
    experiences as well as the currently perceived environment.}
    }
  • E. Gyebi, M. Hanheide, and G. Cielniak, “Educational robotics for teaching computer science in Africa – pilot study,” in WONDER 2015, First International Workshop on Educational Robotics, 2015.
    [BibTeX] [Abstract] [EPrints]

    Educational robotics can play a key role in addressing some of the challenges faced by higher education institutions in Africa. A remaining and open question is related to effectiveness of activities involving educational robots for teaching but also for improving learner’s experience. This paper addresses that question by evaluating a short pilot study which introduced students at the Department of Computer Science, University of Ghana to robot programming. The initial positive results from the study indicate a potential for such activities to enhance teaching experience and practice at African institutions. The proposed integrated set-up including robotic hardware, software and educational tasks was effective and will form a solid base for a future, full scale integration of robotic activities into the undergraduate curricula at this particular institution. This evaluation should be valuable to other educators integrating educational robots into undergraduate curricula in developing countries and elsewhere.

    @inproceedings{lirolem19407,
           booktitle = {WONDER 2015, First International Workshop on Educational Robotics},
               month = {October},
               title = {Educational robotics for teaching computer science in Africa - pilot study},
              author = {Ernest Gyebi and Marc Hanheide and Grzegorz Cielniak},
                year = {2015},
            keywords = {ARRAY(0x7fdc78166bc0)},
                 url = {http://eprints.lincoln.ac.uk/19407/},
            abstract = {Educational robotics can play a key role in addressing some of the challenges faced by higher education institutions in Africa. A remaining and open question is related to effectiveness of activities involving educational robots for teaching but also for improving learner's experience. This paper addresses that question by evaluating a short pilot study which introduced students at the Department of Computer Science, University of Ghana to robot programming. The initial positive results from the study indicate a potential for such activities to enhance teaching experience and practice at African institutions. The proposed integrated set-up including robotic hardware, software and educational tasks was effective and will form a solid base for a future, full scale integration of robotic activities into the undergraduate curricula at this particular institution. This evaluation should be valuable to other educators integrating educational robots into undergraduate curricula in developing countries and elsewhere.}
    }
  • E. Gyebi, M. Hanheide, and G. Cielniak, “Affordable mobile robotic platforms for teaching computer science at African universities,” in 6th International Conference on Robotics in Education, 2015.
    [BibTeX] [Abstract] [EPrints]

    Educational robotics can play a key role in addressing some of the challenges faced by higher education in Africa. One of the major obstacles preventing a wider adoption of initiatives involving educational robotics in this part of the world is lack of robots that would be affordable by African institutions. In this paper, we present a survey and analysis of currently available affordable mobile robots and their suitability for teaching computer science at African universities. To this end, we propose a set of assessment criteria and review a number of platforms costing an order of magnitude less than the existing popular educational robots. Our analysis identifies suitable candidates offering contrasting features and benefits. We also discuss potential issues and promising directions which can be considered by both educators in Africa but also designers and manufacturers of future robot platforms.

    @inproceedings{lirolem17557,
           booktitle = {6th International Conference on Robotics in Education},
               month = {May},
               title = {Affordable mobile robotic platforms for teaching computer science at African universities},
              author = {Ernest Gyebi and Marc Hanheide and Grzegorz Cielniak},
                year = {2015},
            keywords = {ARRAY(0x7fdc7816b948)},
                 url = {http://eprints.lincoln.ac.uk/17557/},
            abstract = {Educational robotics can play a key role in addressing some of the challenges faced by higher education in Africa. One of the major obstacles preventing a wider adoption of initiatives involving educational robotics in this part of the world is lack of robots that would be affordable by African institutions. In this paper, we present a survey and analysis of currently available affordable mobile robots and their suitability for teaching computer science at African universities. To this end, we propose a set of assessment criteria and review a number of platforms costing an order of magnitude less than the existing popular educational robots. Our analysis identifies suitable candidates offering contrasting features and benefits. We also discuss potential issues and promising directions which can be considered by both educators in Africa but also designers and manufacturers of future robot platforms.}
    }
  • E. Gyebi, F. Arvin, M. Hanheide, S. Yue, and G. Cielniak, “Colias: towards an affordable mobile robot for education in developing countries,” in Developing Countries Forum at ICRA 2015, 2015.
    [BibTeX] [Abstract] [EPrints]

    Educational robotics can play a key role in addressing some of the important challenges faced by higher education in developing countries. One of the major obstacles preventing a wider adoption of initiatives involving educational robotics in these parts of the world is a lack of robot platforms which would be affordable for the local educational institutions. In this paper, we present our inexpensive mobile robot platform Colias and assess its potential for education in developing countries. To this end, we describe hardware and software components of the robot, assess its suitability for education and discuss the missing features which will need to be developed to turn Colias into a fully featured educational platform. The presented robot is one of the key components of our current efforts in popularising educational robotics at African universities.

    @inproceedings{lirolem17558,
           booktitle = {Developing Countries Forum at ICRA 2015},
               month = {May},
               title = {Colias: towards an affordable mobile robot for education in developing countries},
              author = {Ernest Gyebi and Farshad Arvin and Marc Hanheide and Shigang Yue and Grzegorz Cielniak},
                year = {2015},
            keywords = {ARRAY(0x7fdc7816b7f8)},
                 url = {http://eprints.lincoln.ac.uk/17558/},
            abstract = {Educational robotics can play a key role in addressing some of the important challenges faced by higher education
    in developing countries. One of the major obstacles preventing a wider adoption of initiatives involving educational robotics in these parts of the world is a lack of robot platforms which would be affordable for the local educational institutions. In this paper, we present our inexpensive mobile robot platform Colias and assess its potential for education in developing countries. To this end, we describe hardware and software components of the robot, assess its suitability for education and discuss the missing features which will need to be developed to turn Colias into a fully featured educational platform. The presented robot is one of the key components of our current efforts in popularising
    educational robotics at African universities.}
    }
  • D. Hebesberger, T. Körtner, J. Pripfl, C. Gisinger, and M. Hanheide, “What do staff in eldercare want a robot for? An assessment of potential tasks and user requirements for a long-term deployment,” in IROS Workshop on "Bridging user needs to deployed applications of service robots", Hamburg, 2015.
    [BibTeX] [Abstract] [EPrints]

    Robotic aids could help to overcome the gap between rising numbers of older adults and at the same time declining numbers of care staff. Assessments of end-user requirements, especially focusing on staff in eldercare facilities are still sparse. Contributing to this field of research this study presents end-user requirements and task analysis gained from a methodological combination of interviews and focus group discussions. The findings suggest different tasks robots in eldercare could engage in such as ‘fetch and carry’ tasks, specific entertainment and information tasks, support in physical and occupational therapy, and in security. Furthermore this paper presents an iterative approach that closes the loop between requirements-assessments and subsequent implementations that follow the found requirements.

    @inproceedings{lirolem18860,
           booktitle = {IROS Workshop on "Bridging user needs to deployed applications of service robots"},
               month = {September},
               title = {What do staff in eldercare want a robot for? An assessment of potential tasks and user requirements for a long-term deployment},
              author = {Denise Hebesberger and Tobias K{\"o}rtner and J{\"u}rgen Pripfl and Christoph Gisinger and Marc Hanheide},
             address = {Hamburg},
                year = {2015},
                note = {The Robot-Era Project has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement num. 288899 
    FP7 - ICT - Challenge 5: ICT for Health, Ageing Well, Inclusion and Governance},
            keywords = {ARRAY(0x7fdc78166ce0)},
                 url = {http://eprints.lincoln.ac.uk/18860/},
            abstract = {Robotic aids could help to overcome the gap between rising numbers of older adults and at the same time declining numbers of care staff. Assessments of end-user requirements, especially focusing on staff in eldercare facilities are still sparse. Contributing to this field of research this study presents end-user requirements and task analysis gained from a methodological combination of interviews and focus group discussions. The findings suggest different tasks robots in eldercare could engage in such as 'fetch and carry' tasks, specific entertainment and information tasks, support in physical and occupational therapy, and in security. Furthermore this paper presents an iterative approach that closes the loop between requirements-assessments and subsequent implementations that follow the found requirements.}
    }
  • J. Kennedy, P. Baxter, and T. Belpaeme, “The robot who tried too hard: social behaviour of a robot tutor can negatively affect child learning,” in Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction – HRI ’15, 2015, pp. 67-74.
    [BibTeX] [Abstract] [EPrints]

    Social robots are finding increasing application in the domain of education, particularly for children, to support and augment learning opportunities. With an implicit assumption that social and adaptive behaviour is desirable, it is therefore of interest to determine precisely how these aspects of behaviour may be exploited in robots to support children in their learning. In this paper, we explore this issue by evaluating the effect of a social robot tutoring strategy with children learning about prime numbers. It is shown that the tutoring strategy itself leads to improvement, but that the presence of a robot employing this strategy amplifies this effect, resulting in significant learning. However, it was also found that children interacting with a robot using social and adaptive behaviours in addition to the teaching strategy did not learn a significant amount. These results indicate that while the presence of a physical robot leads to improved learning, caution is required when applying social behaviour to a robot in a tutoring context.

    @inproceedings{lirolem24856,
           booktitle = {Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction - HRI '15},
               month = {March},
               title = {The robot who tried too hard: social behaviour of a robot tutor can negatively affect child learning},
              author = {James Kennedy and Paul Baxter and Tony Belpaeme},
           publisher = {ACM},
                year = {2015},
               pages = {67--74},
            keywords = {ARRAY(0x7fdc7816ba68)},
                 url = {http://eprints.lincoln.ac.uk/24856/},
            abstract = {Social robots are finding increasing application in the domain of education, particularly for children, to support and augment learning opportunities. With an implicit assumption that social and adaptive behaviour is desirable, it is therefore of interest to determine precisely how these aspects of behaviour may be exploited in robots to support children in their learning. In this paper, we explore this issue by evaluating the effect of a social robot tutoring strategy with children learning about prime numbers. It is shown that the tutoring strategy itself leads to improvement, but that the presence of a robot employing this strategy amplifies this effect, resulting in significant learning. However, it was also found that children interacting with a robot using social and adaptive behaviours in addition to the teaching strategy did not learn a significant amount. These results indicate that while the presence of a physical robot leads to improved learning, caution is required when applying social behaviour to a robot in a tutoring context.}
    }
  • J. Kennedy, P. Baxter, and T. Belpaeme, “Comparing robot embodiments in a guided discovery learning interaction with children,” International Journal of Social Robotics, vol. 7, iss. 2, pp. 293-308, 2015.
    [BibTeX] [Abstract] [EPrints]

    The application of social robots to the domain of education is becoming more prevalent. However, there remain a wide range of open issues, such as the effectiveness of robots as tutors on student learning outcomes, the role of social behaviour in teaching interactions, and how the embodiment of a robot influences the interaction. In this paper, we seek to explore children's behaviour towards a robot tutor for children in a novel guided discovery learning interaction. Since the necessity of real robots (as opposed to virtual agents) in education has not been definitively established in the literature, the effect of robot embodiment is assessed. The results demonstrate that children overcome strong incorrect biases in the material to be learned, but with no significant differences between embodiment conditions. However, the data do suggest that the use of real robots carries an advantage in terms of social presence that could provide educational benefits.

    @article{lirolem23075,
              volume = {7},
              number = {2},
               month = {April},
              author = {James Kennedy and Paul Baxter and Tony Belpaeme},
               title = {Comparing robot embodiments in a guided discovery learning interaction with children},
           publisher = {Springer verlag},
                year = {2015},
             journal = {International Journal of Social Robotics},
               pages = {293--308},
            keywords = {ARRAY(0x7fdc7816b9d8)},
                 url = {http://eprints.lincoln.ac.uk/23075/},
            abstract = {The application of social robots to the domain of education is becoming more prevalent. However, there remain a wide range of open issues, such as the effectiveness of robots as tutors on student learning outcomes, the role of social behaviour in teaching interactions, and how the embodiment of a robot influences the interaction. In this paper, we seek to explore children's behaviour towards a robot tutor for children in a novel guided discovery learning interaction. Since the necessity of real robots (as opposed to virtual agents) in education has not been definitively established in the literature, the effect of robot embodiment is assessed. The results demonstrate that children overcome strong incorrect biases in the material to be learned, but with no significant differences between embodiment conditions. However, the data do suggest that the use of real robots carries an advantage in terms of social presence that could provide educational benefits}
    }
  • A. Kodzhabashev and M. Mangan, “Route following without scanning,” in Biomimetic and Biohybrid Systems: 4th International Conference, Living Machines 2015,, 2015, pp. 199-210.
    [BibTeX] [Abstract] [EPrints]

    Desert ants are expert navigators, foraging over large distances using visually guided routes. Recent models of route following can reproduce aspects of route guidance, yet the underlying motor patterns do not reflect those of foraging ants. Specifically, these models select the direction of movement by rotating to find the most familiar view. Yet scanning patterns are only occasionally observed in ants. We propose a novel route following strategy inspired by klinokinesis. By using familiarity of the view to modulate the magnitude of alternating left and right turns, and the size of forward steps, this strategy is able to continually correct the heading of a simulated ant to maintain its course along a route. Route following by klinokinesis and visual compass are evaluated against real ant routes in a simulation study and on a mobile robot in the real ant habitat. We report that in unfamiliar surroundings the proposed method can also generate ant-like scanning behaviours.

    @inproceedings{lirolem24845,
           booktitle = {Biomimetic and Biohybrid Systems: 4th International Conference, Living Machines 2015,},
               month = {July},
               title = {Route following without scanning},
              author = {Aleksandar Kodzhabashev and Michael Mangan},
           publisher = {Springer International Publishing},
                year = {2015},
               pages = {199--210},
            keywords = {ARRAY(0x7fdc78166dd0)},
                 url = {http://eprints.lincoln.ac.uk/24845/},
            abstract = {Desert ants are expert navigators, foraging over large distances using visually guided routes. Recent models of route following can reproduce aspects of route guidance, yet the underlying motor patterns do not reflect those of foraging ants. Specifically, these models select the direction of movement by rotating to find the most familiar view. Yet scanning patterns are only occasionally observed in ants. We propose a novel route following strategy inspired by klinokinesis. By using familiarity of the view to modulate the magnitude of alternating left and right turns, and the size of forward steps, this strategy is able to continually correct the heading of a simulated ant to maintain its course along a route. Route following by klinokinesis and visual compass are evaluated against real ant routes in a simulation study and on a mobile robot in the real ant habitat. We report that in unfamiliar surroundings the proposed method can also generate ant-like scanning behaviours.}
    }
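
    The route-following strategy above turns more sharply and steps more cautiously the less familiar the current view appears. A hedged, illustrative controller of that flavour (not the authors' implementation; the gains and the unfamiliarity measure are placeholders) is sketched below:

        import numpy as np

        def klinokinesis_step(heading, turn_sign, unfamiliarity,
                              base_turn=0.35, base_step=0.05):
            """One control update: 'unfamiliarity' in [0, 1] (0 = view fully familiar).
            The turn alternates left/right and grows with unfamiliarity, while the
            forward step shrinks, so the heading is corrected along familiar routes."""
            turn = turn_sign * base_turn * unfamiliarity        # radians
            new_heading = heading + turn
            step = base_step * (1.0 - unfamiliarity)            # metres
            dx, dy = step * np.cos(new_heading), step * np.sin(new_heading)
            return new_heading, (dx, dy), -turn_sign            # flip turn side for next update
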
  • T. Krajnik, J. P. Fentanes, J. Santos, K. Kusumam, and T. Duckett, “FreMEn: frequency map enhancement for long-term mobile robot autonomy in changing environments,” in ICRA 2015 Workshop on Visual Place Recognition in Changing Environments, 2015.
    [BibTeX] [Abstract] [EPrints]

    We present a method for introducing representation of dynamics into environment models that were originally tailored to represent static scenes. Rather than using a fixed probability value, the method models the uncertainty of the elementary environment states by probabilistic functions of time. These are composed of combinations of harmonic functions, which are obtained by means of frequency analysis. The use of frequency analysis allows to integrate long-term observations into memory-efficient spatio-temporal models that reflect the mid- to long-term environment dynamics. These frequency-enhanced spatio-temporal models allow to predict the future environment states, which improves the efficiency of mobile robot operation in changing environments. In a series of experiments performed over periods of days to years, we demonstrate that the proposed approach improves localization, path planning and exploration.

    @inproceedings{lirolem17953,
           booktitle = {ICRA 2015 Workshop on Visual Place Recognition in Changing Environments},
               month = {May},
               title = {FreMEn: frequency map enhancement for long-term mobile robot autonomy in changing environments},
              author = {Tomas Krajnik and Jaime Pulido Fentanes and Joao Santos and Keerthy Kusumam and Tom Duckett},
           publisher = {IEEE},
                year = {2015},
            keywords = {ARRAY(0x7fdc7816b8b8)},
                 url = {http://eprints.lincoln.ac.uk/17953/},
            abstract = {We present a method for introducing representation of dynamics into environment models that were originally tailored to represent static scenes. Rather than using a fixed probability value, the method models the uncertainty of the elementary environment states by probabilistic functions of time. These are composed of combinations of harmonic functions, which are obtained by means of frequency analysis. The use of frequency analysis allows to integrate long-term observations into memory-efficient spatio-temporal models that reflect the mid- to long-term environment dynamics. These frequency-enhanced spatio-temporal models allow to predict the future environment states, which improves the efficiency of mobile robot operation in changing environments.   In a series of experiments performed over periods of days to years, we demonstrate that the proposed approach improves localization, path planning and exploration.}
    }
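
    The core idea above, modelling a state's probability as a static mean plus a few dominant periodic components identified from timestamped observations, can be sketched roughly as follows (a simplified illustration, not the FreMEn implementation; candidate periods such as one day or one week are assumed to be supplied):

        import numpy as np

        def fit_periodic_model(times, states, candidate_periods, n_components=2):
            """times: observation timestamps (s); states: binary observations {0, 1};
            returns the static mean plus the strongest periodic components."""
            times = np.asarray(times, dtype=float)
            residual = np.asarray(states, dtype=float)
            mean = residual.mean()
            residual = residual - mean
            comps = []
            for T in candidate_periods:
                omega = 2 * np.pi / T
                # complex amplitude of this frequency from non-uniformly sampled data
                amp = np.mean(residual * np.exp(-1j * omega * times))
                comps.append((abs(amp), omega, np.angle(amp)))
            comps.sort(reverse=True)                     # keep the strongest periodicities
            return mean, comps[:n_components]

        def predict_state_probability(model, t):
            """Predicted probability that the state holds at future time t."""
            mean, comps = model
            p = mean + sum(2 * a * np.cos(w * t + phi) for a, w, phi in comps)
            return float(np.clip(p, 0.0, 1.0))

    The predicted probabilities can then be used, as the abstract describes, to improve localisation, path planning and exploration by anticipating how the environment is likely to look at a given time.
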
  • T. Krajnik, J. Santos, and T. Duckett, “Life-long spatio-temporal exploration of dynamic environments,” in European Conference on Mobile Robots 2015 (ECMR 15), 2015.
    [BibTeX] [Abstract] [EPrints]

    We propose a new idea for life-long mobile robot spatio-temporal exploration of dynamic environments. Our method assumes that the world is subject to perpetual change, which adds an extra, temporal dimension to the explored space and makes the exploration task a never-ending data-gathering process. To create and maintain a spatio-temporal model of a dynamic environment, the robot has to determine not only where, but also when to perform observations. We address the problem by application of information-theoretic exploration to world representations that model the uncertainty of environment states as probabilistic functions of time. We compare the performance of different exploration strategies and temporal models on real-world data gathered over the course of several months and show that combination of dynamic environment representations with information-gain exploration principles allows to create and maintain up-to-date models of constantly changing environments.

    @inproceedings{lirolem17955,
           booktitle = {European Conference on Mobile Robots 2015 (ECMR 15)},
               month = {September},
               title = {Life-long spatio-temporal exploration of dynamic environments},
              author = {Tomas Krajnik and Joao Santos and Tom Duckett},
           publisher = {IEEE},
                year = {2015},
            keywords = {ARRAY(0x7fdc78166d70)},
                 url = {http://eprints.lincoln.ac.uk/17955/},
            abstract = {We propose a new idea for life-long mobile robot spatio-temporal exploration of dynamic environments.  Our method assumes that the world is subject to perpetual change, which adds an extra, temporal dimension to the explored space and makes the exploration task a never-ending data-gathering process. To create and maintain a spatio-temporal model of a dynamic environment, the robot has to determine not only where, but also when to perform observations.  We address the problem by application of information-theoretic exploration to world representations that model the uncertainty of environment states as probabilistic functions of time.
    
    We compare the performance of different exploration strategies and temporal models on real-world data gathered over the course of several months and show that combination of dynamic environment representations with information-gain exploration principles allows to create and maintain up-to-date models of constantly changing environments.}
    }
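
    One simple way to turn such temporal predictions into an information-theoretic "where and when to look" decision, in the spirit of the abstract (illustrative only, not the evaluated strategies), is to pick the candidate observation whose predicted binary state is most uncertain:

        import numpy as np

        def bernoulli_entropy(p):
            """Entropy (bits) of a predicted binary environment state."""
            p = float(np.clip(p, 1e-9, 1 - 1e-9))
            return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

        def most_informative_observation(candidates):
            """candidates: iterable of (location, time, predicted_probability);
            returns the candidate with the highest predicted entropy."""
            return max(candidates, key=lambda c: bernoulli_entropy(c[2]))
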
  • T. Krajnik, F. Arvin, A. E. Turgut, S. Yue, and T. Duckett, “COSΦ: Vision-based artificial pheromone system for robotic swarms,” in IEEE International Conference on Robotics and Automation (ICRA 2015), 2015.
    [BibTeX] [Abstract] [EPrints]

    We propose a novel spatio-temporal mobile-robot exploration method for dynamic, human-populated environments. In contrast to other exploration methods that model the environment as being static, our spatio-temporal exploration method creates and maintains a world model that not only represents the environment’s structure, but also its dynamics over time. Consideration of the world dynamics adds an extra, temporal dimension to the explored space and makes the exploration task a never-ending data-gathering process to keep the robot’s environment model up-to-date. Thus, the crucial question is not only where, but also when to observe the explored environment. We address the problem by application of information-theoretic exploration to world representations that model the environment states’ uncertainties as probabilistic functions of time. The predictive ability of the spatio-temporal model allows the exploration method to decide not only where, but also when to make environment observations. To verify the proposed approach, an evaluation of several exploration strategies and spatio-temporal models was carried out using real-world data gathered over several months. The evaluation indicates that through understanding of the environment dynamics, the proposed spatio-temporal exploration method could predict which locations were going to change at a specific time and use this knowledge to guide the robot. Such an ability is crucial for long-term deployment of mobile robots in human-populated spaces that change over time.

    @inproceedings{lirolem17952,
           booktitle = {IEEE International Conference on Robotics and Automation (ICRA 2015)},
               month = {May},
               title = {COS{\ensuremath{\Phi}}: Vision-based artificial pheromone system for robotic swarms},
              author = {Tomas Krajnik and Farshad Arvin and Ali Emre Turgut and Shigang Yue and Tom Duckett},
           publisher = {IEEE},
                year = {2015},
            keywords = {ARRAY(0x7fdc7816b858)},
                 url = {http://eprints.lincoln.ac.uk/17952/},
            abstract = {We propose a novel spatio-temporal mobile-robot exploration method for dynamic, human-populated environments. In contrast to other exploration methods that model the environment as being static, our spatio-temporal exploration method creates and maintains a world model that not only represents the environment's structure, but also its dynamics over time.  Consideration of the world dynamics adds an extra, temporal dimension to the explored space and makes the exploration task a never-ending data-gathering process to keep the robot's environment model up-to-date.
    Thus, the crucial question is not only where, but also when to observe the explored environment. 
    We address the problem by application of information-theoretic exploration to world representations that model the environment states' uncertainties as probabilistic functions of time. The predictive ability of the spatio-temporal model allows the exploration method to decide not only where, but also when to make environment observations. 
    
    To verify the proposed approach, an evaluation of several exploration strategies and spatio-temporal models was carried out using real-world data gathered over several months. The evaluation indicates that through understanding of the environment dynamics, the proposed spatio-temporal exploration method could predict which locations were going to change at a specific time and use this knowledge to guide the robot.  Such an ability is crucial for long-term deployment of mobile robots in human-populated spaces that change over time.}
    }
  • T. Krajnik, P. deCristoforis, M. Nitsche, K. Kusumam, and T. Duckett, “Image features and seasons revisited,” in European Conference on Mobile Robots 2015 (ECMR 15), 2015.
    [BibTeX] [Abstract] [EPrints]

    We present an evaluation of standard image features in the context of long-term visual teach-and-repeat mobile robot navigation, where the environment exhibits significant changes in appearance caused by seasonal weather variations and daily illumination changes. We argue that in the given long-term scenario, the viewpoint, scale and rotation invariance of the standard feature extractors is less important than their robustness to the mid- and long-term environment appearance changes. Therefore, we focus our evaluation on the robustness of image registration to variable lighting and naturally-occurring seasonal changes. We evaluate the image feature extractors on three datasets collected by mobile robots in two different outdoor environments over the course of one year. Based on this analysis, we propose a novel feature descriptor based on a combination of evolutionary algorithms and Binary Robust Independent Elementary Features, which we call GRIEF (Generated BRIEF). In terms of robustness to seasonal changes, the GRIEF feature descriptor outperforms the other ones while being computationally more efficient.

    @inproceedings{lirolem17954,
           booktitle = {European Conference on Mobile Robots 2015 (ECMR 15)},
               month = {September},
               title = {Image features and seasons revisited},
              author = {Tomas Krajnik and Pablo deCristoforis and Matias Nitsche and Keerthy Kusumam and Tom Duckett},
           publisher = {IEEE},
                year = {2015},
            keywords = {ARRAY(0x7fdc78166da0)},
                 url = {http://eprints.lincoln.ac.uk/17954/},
            abstract = {We present an evaluation of standard image features in the context of long-term visual teach-and-repeat mobile robot navigation, where the environment exhibits significant changes in appearance caused by seasonal weather variations and daily illumination changes. We argue that in the given long-term scenario, the viewpoint, scale and rotation invariance of the standard feature extractors is less important than their robustness to the mid- and long-term environment appearance changes. Therefore, we focus our evaluation on the robustness of image registration to variable lighting and naturally-occurring seasonal changes.  We evaluate the image feature extractors on three datasets collected by mobile robots in two different outdoor environments over the course of one year. Based on this analysis, we propose a novel feature descriptor based on a combination of evolutionary algorithms and Binary Robust Independent Elementary Features, which we call GRIEF (Generated BRIEF). In terms of robustness to seasonal changes, the GRIEF feature descriptor outperforms the other ones while being computationally more efficient.}
    }
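
    The GRIEF descriptor above builds on BRIEF-style binary features. As a rough illustration only (not the authors' implementation), the sketch below shows the basic BRIEF mechanism that GRIEF refines: intensities at a fixed set of pixel pairs are compared to form a bit string, and patches are matched by Hamming distance. In GRIEF the comparison pairs would be tuned by an evolutionary procedure rather than drawn at random as here.

    # Illustrative BRIEF-style descriptor (not the GRIEF implementation): compare
    # intensities at fixed pixel pairs inside a patch and match by Hamming distance.
    import numpy as np

    rng = np.random.default_rng(0)
    PATCH = 31                                       # patch size in pixels
    PAIRS = rng.integers(0, PATCH, size=(256, 4))    # 256 comparisons: (y1, x1, y2, x2)

    def describe(patch):
        """Binary descriptor: bit i is 1 iff intensity(p1_i) < intensity(p2_i)."""
        return np.array([patch[y1, x1] < patch[y2, x2] for y1, x1, y2, x2 in PAIRS],
                        dtype=np.uint8)

    def hamming(d1, d2):
        return int(np.count_nonzero(d1 != d2))

    # Usage: a patch matched against a brightness-shifted copy of itself gives a much
    # smaller Hamming distance than against an unrelated patch.
    patch = rng.integers(0, 256, size=(PATCH, PATCH))
    shifted = np.clip(patch.astype(int) + 40, 0, 255)
    other = rng.integers(0, 256, size=(PATCH, PATCH))
    print(hamming(describe(patch), describe(shifted)),
          hamming(describe(patch), describe(other)))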
  • T. Krajnik, M. Kulich, L. Mudrova, R. Ambrus, and T. Duckett, “Where’s Waldo at time t? Using spatio-temporal models for mobile robot search,” in IEEE International Conference on Robotics and Automation (ICRA), 2015, pp. 2140-2146.
    [BibTeX] [Abstract] [EPrints]

    We present a novel approach to mobile robot search for non-stationary objects in partially known environments. We formulate the search as a path planning problem in an environment where the probability of object occurrences at particular locations is a function of time. We propose to explicitly model the dynamics of the object occurrences by their frequency spectra. Using this spectral model, our path planning algorithm can construct plans that reflect the likelihoods of object locations at the time the search is performed. Three datasets collected over several months containing person and object occurrences in residential and office environments were chosen to evaluate the approach. Several types of spatio-temporal models were created for each of these datasets and the efficiency of the search method was assessed by measuring the time it took to locate a particular object. The results indicate that modeling the dynamics of object occurrences reduces the search time by 25% to 65% compared to maps that neglect these dynamics.

    @inproceedings{lirolem17949,
           booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
               month = {May},
               title = {Where's Waldo at time t? Using spatio-temporal models for mobile robot search},
              author = {Tomas Krajnik and Miroslav Kulich and Lenka Mudrova and Rares Ambrus and Tom Duckett},
           publisher = {Institute of Electrical and Electronics Engineers},
                year = {2015},
               pages = {2140--2146},
            keywords = {ARRAY(0x7fdc7816b888)},
                 url = {http://eprints.lincoln.ac.uk/17949/},
            abstract = {We present a novel approach to mobile robot search for non-stationary objects in partially known environments. We formulate the search as a path planning problem in an environment where the probability of object occurrences at particular locations is a function of time. We propose to explicitly model the dynamics of the object occurrences by their frequency spectra. Using this spectral model, our path planning algorithm can construct plans that reflect the likelihoods of object locations at the time the search is performed. Three datasets collected over several months containing person and object occurrences in residential and office environments were chosen to evaluate the approach. Several types of spatio-temporal models were created for each of these datasets and the efficiency of the search method was assessed by measuring the time it took to locate a particular object. The results indicate that modeling the dynamics of object occurrences reduces the search time by 25\% to 65\% compared to maps that neglect these dynamics.}
    }
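
    As a rough illustration of the spectral occurrence modelling described in the "Where's Waldo" abstract above (a simplified sketch, not the authors' code), binary presence observations can be decomposed into a mean plus a few dominant periodic components, which then predict the probability of presence at the time a search is planned:

    # Illustrative sketch: model the probability of an object being present at time t
    # from binary observations, using the strongest periodic components of the sequence.
    import numpy as np

    def fit_spectral_model(times, observations, periods, order=2):
        """Fit mean + a few cosine components to 0/1 presence observations."""
        obs = np.asarray(observations, dtype=float)
        mean = obs.mean()
        residual = obs - mean
        components = []
        for period in periods:
            omega = 2.0 * np.pi / period
            # Complex amplitude of this frequency in the residual signal.
            amp = np.mean(residual * np.exp(-1j * omega * np.asarray(times)))
            components.append((abs(amp), omega, np.angle(amp)))
        components.sort(reverse=True)          # keep only the strongest components
        return mean, components[:order]

    def predict_presence(model, t):
        """Predicted probability that the object is present at time t."""
        mean, components = model
        p = mean + sum(2.0 * a * np.cos(w * t + phi) for a, w, phi in components)
        return float(np.clip(p, 0.0, 1.0))

    # Usage: hourly observations over a week, candidate daily and weekly periods.
    times = np.arange(0, 7 * 24 * 3600, 3600)
    observations = (np.cos(2 * np.pi * times / 86400) > 0.3).astype(int)
    model = fit_spectral_model(times, observations, periods=[86400, 7 * 86400])
    print(predict_presence(model, t=8 * 24 * 3600))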
  • H. Li, J. Peng, and S. Yue, “The sparsity of underdetermined linear system via lp minimization for 0 < p < 1,” Mathematical Problems in Engineering, vol. 2015, 2015.
    [BibTeX] [Abstract] [EPrints]

    The sparsity problems have attracted a great deal of attention in recent years; they aim to find the sparsest solution of a representation or an equation. In this paper, we mainly study the sparsity of underdetermined linear systems via lp minimization for 0 < p < 1. We show, for a given underdetermined linear system of equations Ax = b with A an m×n matrix, that although it is not certain that the problem (Pp) (i.e., min_x ||x||_p^p subject to Ax = b, where 0 < p < 1) generates sparser solutions as the value of p decreases, and in particular that the problem (Pp) generates sparser solutions than the problem (P1) (i.e., min_x ||x||_1 subject to Ax = b), there exists a sparse constant γ(A, b) > 0 such that the following conclusions hold when p < γ(A, b): (1) the problem (Pp) generates sparser solutions as the value of p decreases; (2) the sparsest optimal solution to the problem (Pp) is unique up to absolute-value permutation; (3) if X1 and X2 are the sparsest optimal solutions to the problems (Pp1) and (Pp2), respectively, and X1 is not an absolute-value permutation of X2, then there exist t1, t2 ∈ [p1, p2] such that X1 is the sparsest optimal solution to the problem (Pt) for all t ∈ [p1, t1] and X2 is the sparsest optimal solution to the problem (Pt) for all t ∈ (t2, p2].

    @article{lirolem17577,
              volume = {2015},
               month = {June},
              author = {Haiyang Li and Jigen Peng and Shigang Yue},
                note = {Article ID 584712, 6 pages},
               title = {The sparsity of underdetermined linear system via lp minimization for 0 {\ensuremath{<}} p {\ensuremath{<}} 1},
           publisher = {Hindawi Publishing Corporation},
             journal = {Mathematical Problems in Engineering},
                year = {2015},
            keywords = {ARRAY(0x7fdc7816b798)},
                 url = {http://eprints.lincoln.ac.uk/17577/},
            abstract = {The sparsity problems have attracted a great deal of attention in recent years, which aim to find the sparsest solution of a representation or an equation. In the paper, we mainly study the sparsity of underdetermined linear system via lp minimization for 0{\ensuremath{<}}p{\ensuremath{<}}1. We show, for a given underdetermined linear system of equations pm{$\times$}np = p, that although it is not certain that the problem (pp) (i.e., minlx{\ensuremath{|}}{\ensuremath{|}}X{\ensuremath{|}}{\ensuremath{|}}plp subject to pp = b, where  0{\ensuremath{<}}p{\ensuremath{<}}1 ) generates sparser solutions as the value of p decreases and especially the problem (plp) generates sparser solutions than the problem (p1) (i.e., minlx{\ensuremath{|}}{\ensuremath{|}}X{\ensuremath{|}}{\ensuremath{|}}1 subject to AX = b ), there exists a sparse constant {\ensuremath{\gamma}}(A, p) {\ensuremath{>}} 0 such that the following conclusions hold when p {\ensuremath{<}} {\ensuremath{\gamma}}(A, b): (1) the problem (pp) generates sparser solution as the value of p decreases; (2) the sparsest optimal solution to the problem (pp) is unique under the sense of absolute value permutation; (3) let X1 and X2 be the sparsest optimal solution to the problems (pp1) and (pp2) , respectively, and let  X1 not be the absolute value permutation of  X2. Then there exist t1,t2 {\ensuremath{\epsilon}} [p1,p2]  such that X1 is the sparsest optimal solution to the problem (pt) (?t {\ensuremath{\epsilon}} [p1, t1])  and X2 is the sparsest optimal solution to the problem (pt) (?t {\ensuremath{\epsilon}} (t2, p2]).}
    }
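
    For readability, the optimisation problems referred to in the abstract above can be restated in standard notation (a plain restatement, not text from the paper):

    \begin{align*}
      (P_p):\quad & \min_{x \in \mathbb{R}^n} \; \|x\|_p^p = \sum_{i=1}^{n} |x_i|^p \quad \text{subject to } Ax = b, \qquad 0 < p < 1,\\
      (P_1):\quad & \min_{x \in \mathbb{R}^n} \; \|x\|_1 \quad \text{subject to } Ax = b,
    \end{align*}

    where A is an m×n matrix with m < n (underdetermined), and the sparse constant γ(A, b) > 0 bounds the range of p for which the paper's monotonicity and uniqueness conclusions hold.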
  • P. Lightbody, C. Dondrup, and M. Hanheide, “Make me a sandwich! Intrinsic human identification from their course of action,” in Towards a Framework for Joint Action, 2015.
    [BibTeX] [Abstract] [EPrints]

    In order to allow humans and robots to work closely together and as a team, we need to equip robots not only with a general understanding of joint action, but also with an understanding of the idiosyncratic differences in the ways humans perform certain tasks. This will allow robots to be better colleagues, by anticipating an individual’s actions, and acting accordingly. In this paper, we present a way of encoding a human’s course of action as a probabilistic sequence of qualitative states, and show that such a model can be employed to identify individual humans from their respective course of action, even when accomplishing the very same goal state. We conclude from our findings that there are significant variations in the ways humans accomplish the very same task, and that our representation could in future work inform robot (task) planning in collaborative settings.

    @inproceedings{lirolem19696,
           booktitle = {Towards a Framework for Joint Action},
               month = {October},
               title = {Make me a sandwich! Intrinsic human identification from their course of action},
              author = {Peter Lightbody and Christian Dondrup and Marc Hanheide},
                year = {2015},
            keywords = {ARRAY(0x7fdc78166bf0)},
                 url = {http://eprints.lincoln.ac.uk/19696/},
            abstract = {In order to allow humans and robots to work closely together and as a team, we need to equip robots not only with a general understanding of joint action, but also with an understanding of the idiosyncratic differences in the ways humans perform certain tasks. This will allow robots to be better colleagues, by anticipating an individual's actions, and acting accordingly. In this paper, we present a way of encoding a human's course of action as a probabilistic sequence of qualitative states, and show that such a model can be employed to identify individual humans from their respective course of action, even when accomplishing the very same goal state. We conclude from our findings that there are significant variations in the ways humans accomplish the very same task, and that our representation could in future work inform robot (task) planning in collaborative settings.}
    }
  • N. Mavridis, N. Bellotto, K. Iliopoulos, and N. V. de Weghe, “QTC3D: extending the qualitative trajectory calculus to three dimensions,” Information Sciences, vol. 322, pp. 20-30, 2015.
    [BibTeX] [Abstract] [EPrints]

    Spatial interactions between agents (humans, animals, or machines) carry information of high value to human or electronic observers. However, not all the information contained in a pair of continuous trajectories is important and thus the need for qualitative descriptions of interaction trajectories arises. The Qualitative Trajectory Calculus (QTC) (Van de Weghe, 2004) is a promising development towards this goal. Numerous variants of QTC have been proposed in the past and QTC has been applied towards analyzing various interaction domains. However, an inherent limitation of those QTC variations that deal with lateral movements is that they are limited to two-dimensional motion; therefore, complex three-dimensional interactions, such as those occurring between flying planes or birds, cannot be captured. Towards that purpose, in this paper QTC3D is presented: a novel qualitative trajectory calculus that can deal with full three-dimensional interactions. QTC3D is based on transformations of the Frenet-Serret frames accompanying the trajectories of the moving objects. Apart from the theoretical exposition, including definition and properties, as well as computational aspects, we also present an application of QTC3D towards modeling bird flight. Thus, the power of QTC is now extended to the full dimensionality of physical space, enabling succinct yet rich representations of spatial interactions between agents.

    @article{lirolem17596,
              volume = {322},
               month = {November},
              author = {Nikolaos Mavridis and Nicola Bellotto and Konstantinos Iliopoulos and Nico Van de Weghe},
               title = {QTC3D: extending the qualitative trajectory calculus to three dimensions},
           publisher = {Elsevier},
             journal = {Information Sciences},
               pages = {20--30},
                year = {2015},
            keywords = {ARRAY(0x7fdc78166b60)},
                 url = {http://eprints.lincoln.ac.uk/17596/},
            abstract = {Spatial interactions between agents (humans, animals, or machines) carry information of high value to human or electronic observers. However, not all the information contained in a pair of continuous trajectories is important and thus the need for qualitative descriptions of interaction trajectories arises. The Qualitative Trajectory Calculus (QTC) (Van de Weghe, 2004) is a promising development towards this goal. Numerous variants of QTC have been proposed in the past and QTC has been applied towards analyzing various interaction domains. However, an inherent limitation of those QTC variations that deal with lateral movements is that they are limited to two-dimensional motion; therefore, complex three-dimensional interactions, such as those occurring between flying planes or birds, cannot be captured. Towards that purpose, in this paper QTC3D is presented: a novel qualitative trajectory calculus that can deal with full three-dimensional interactions. QTC3D is based on transformations of the Frenet-Serret frames accompanying the trajectories of the moving objects. Apart from the theoretical exposition, including definition and properties, as well as computational aspects, we also present an application of QTC3D towards modeling bird flight. Thus, the power of QTC is now extended to the full dimensionality of physical space, enabling succinct yet rich representations of spatial interactions between agents.}
    }
  • M. Milford, H. Kim, M. Mangan, S. Leutenegger, T. Stone, B. Webb, and A. Davison, “Place recognition with event-based cameras and a neural implementation of SeqSLAM,” OALib Journal, 2015.
    [BibTeX] [Abstract] [EPrints]

    Event-based cameras (Figure 1) offer much potential to the fields of robotics and computer vision, in part due to their large dynamic range and extremely high “frame rates”. These attributes make them, at least in theory, particularly suitable for enabling tasks like navigation and mapping on high speed robotic platforms under challenging lighting conditions, a task which has been particularly challenging for traditional algorithms and camera sensors. Before these tasks become feasible however, progress must be made towards adapting and innovating current RGB-camera-based algorithms to work with event-based cameras. In this paper we present ongoing research investigating two distinct approaches to incorporating event-based cameras for robotic navigation: 1. The investigation of suitable place recognition / loop closure techniques, and 2. The development of efficient neural implementations of place recognition techniques that enable the possibility of place recognition using event-based cameras at very high frame rates using neuromorphic computing hardware. Figure 1: The first commercial event camera: (a) DVS128; (b) a stream of events (upward and downward spikes: positive and negative events); (c) image-like visualisation of accumulated events within a time interval (white and black: positive and negative events). From (H. Kim, 2014).

    @article{lirolem23587,
               title = {Place recognition with event-based cameras and a neural implementation of SeqSLAM},
              author = {Michael Milford and Hanme Kim and Michael Mangan and Stefan Leutenegger and Tom Stone and Barbara Webb and Andrew Davison},
           publisher = {Open Access Library},
                year = {2015},
                note = {arXiv preprint arXiv:1505.04548},
             journal = {OALib Journal},
            keywords = {ARRAY(0x7fdc7816bb28)},
                 url = {http://eprints.lincoln.ac.uk/23587/},
            abstract = {Event-based cameras (Figure 1) offer much potential to the fields of robotics and computer
    vision, in part due to their large dynamic range and extremely high ?frame rates?. These
    attributes make them, at least in theory, particularly suitable for enabling tasks like
    navigation and mapping on high speed robotic platforms under challenging lighting
    conditions, a task which has been particularly challenging for traditional algorithms and
    camera sensors. Before these tasks become feasible however, progress must be made
    towards adapting and innovating current RGB-camera-based algorithms to work with eventbased
    cameras. In this paper we present ongoing research investigating two distinct
    approaches to incorporating event-based cameras for robotic navigation:
    1. The investigation of suitable place recognition / loop closure techniques, and
    2. The development of efficient neural implementations of place recognition
    techniques that enable the possibility of place recognition using event-based
    cameras at very high frame rates using neuromorphic computing hardware.
    Figure 1: The first commercial event camera: (a) DVS128; (b) a stream of events (upward and
    downward spikes: positive and negative events); (c) image-like visualisation of accumulated
    events within a time interval (white and black: positive and negative events). From (H. Kim,
    2014)].}
    }
  • M. Nitsche, T. Krajnik, P. Cizek, M. Mejail, and T. Duckett, “WhyCon: an efficient, marker-based localization system,” in IROS Workshop on Aerial Open-source Robotics, 2015.
    [BibTeX] [Abstract] [EPrints]

    We present an open-source marker-based localization system intended as a low-cost easy-to-deploy solution for aerial and swarm robotics. The main advantage of the presented method is its high computational efficiency, which allows its deployment on small robots with limited computational resources. Even on low-end computers, the core component of the system can detect and estimate 3D positions of hundreds of black and white markers at the maximum frame-rate of standard cameras. The method is robust to changing lighting conditions and achieves accuracy in the order of millimeters to centimeters. Due to its reliability, simplicity of use and availability as an open-source ROS module (http://purl.org/robotics/whycon), the system is now used in a number of aerial robotics projects where fast and precise relative localization is required.

    @inproceedings{lirolem18877,
           booktitle = {IROS Workshop on Aerial Open-source Robotics},
               month = {September},
               title = {WhyCon: an efficient, marker-based localization system},
              author = {Matias Nitsche and Tomas Krajnik and Petr Cizek and Marta Mejail and Tom Duckett},
                year = {2015},
            keywords = {ARRAY(0x7fdc78166d10)},
                 url = {http://eprints.lincoln.ac.uk/18877/},
            abstract = {We present an open-source marker-based localization system intended as a low-cost easy-to-deploy solution for aerial and swarm robotics. The main advantage of the presented method is its high computational efficiency, which allows its deployment on small robots with limited computational resources. Even on low-end computers, the core component of the system can detect and estimate 3D positions of hundreds of black and white markers at the maximum frame-rate of standard cameras. The method is robust to changing lighting conditions and achieves accuracy in the order of millimeters to centimeters. Due to its reliability, simplicity of use and availability as an open-source ROS module (http://purl.org/robotics/whycon), the system is now used in a number of aerial robotics projects where fast and precise relative localization is required.}
    }
  • J. Peng, S. Yue, and H. Li, “NP/CMP equivalence: a phenomenon hidden among sparsity models l_0 minimization and l_p minimization for information processing,” IEEE Transactions on Information Theory, vol. 61, iss. 7, pp. 4028-4033, 2015.
    [BibTeX] [Abstract] [EPrints]

    In this paper, we have proved that to every underdetermined linear system Ax = b there corresponds a constant p*(A, b) > 0 such that every solution to the lp-norm minimization problem also solves the l0-norm minimization problem whenever 0 < p < p*(A, b). This phenomenon is named NP/CMP equivalence.

    @article{lirolem17877,
              volume = {61},
              number = {7},
               month = {June},
              author = {Jigen Peng and Shigang Yue and Haiyang Li},
               title = {NP/CMP equivalence: a phenomenon hidden among sparsity models l\_\{0\} minimization and l\_\{p\} minimization for information processing},
           publisher = {IEEE},
                year = {2015},
             journal = {IEEE Transactions on Information Theory},
               pages = {4028--4033},
            keywords = {ARRAY(0x7fdc78166e90)},
                 url = {http://eprints.lincoln.ac.uk/17877/},
            abstract = {In this paper, we have proved that in every underdetermined linear system Ax = b, there corresponds a constant p*(A, b) {\ensuremath{>}} 0 such that every solution to the l p-norm minimization problem also solves the l0-norm minimization problem whenever 0 {\ensuremath{<}}; p {\ensuremath{<}}; p*(A, b). This phenomenon is named NP/CMP equivalence.}
    }
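
    Stated compactly (a restatement for clarity, not a quotation from the paper), the result above says that for every underdetermined system Ax = b there is a threshold p*(A, b) > 0 such that

    \[
      0 < p < p^{*}(A,b) \;\Longrightarrow\; \arg\min_{Ax=b} \|x\|_p^p \;\subseteq\; \arg\min_{Ax=b} \|x\|_0 ,
    \]

    i.e. every lp-norm minimiser is also an l0-norm (sparsest-solution) minimiser.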
  • V. Sandulescu, S. Andrews, D. Ellis, N. Bellotto, and O. M. Mozos, “Stress detection using wearable physiological sensors,” Lecture Notes in Computer Science, vol. 9107, pp. 526-532, 2015.
    [BibTeX] [Abstract] [EPrints]

    As the population increases in the world, the ratio of health carers is rapidly decreasing. Therefore, there is an urgent need to create new technologies to monitor the physical and mental health of people during their daily life. In particular, negative mental states like depression and anxiety are big problems in modern societies, usually due to stressful situations during everyday activities including work. This paper presents a machine learning approach for stress detection on people using wearable physiological sensors with the final aim of improving their quality of life. The presented technique can monitor the state of the subject continuously and classify it into "stressful" or "non-stressful" situations. Our classification results show that this method is a good starting point towards real-time stress detection.

    @article{lirolem17143,
              volume = {9107},
               month = {June},
              author = {Virginia Sandulescu and Sally Andrews and David Ellis and Nicola Bellotto and Oscar Martinez Mozos},
                note = {Series: Lecture Notes in Computer Science
    Artificial Computation in Biology and Medicine: International Work-Conference on the Interplay Between Natural and Artificial Computation, IWINAC 2015, Elche, Spain, June 1-5, 2015, Proceedings, Part I},
               title = {Stress detection using wearable physiological sensors},
           publisher = {Springer verlag},
                year = {2015},
             journal = {Lecture Notes in Computer Science},
               pages = {526--532},
            keywords = {ARRAY(0x7fdc7816b7c8)},
                 url = {http://eprints.lincoln.ac.uk/17143/},
            abstract = {As the population increases in the world, the ratio of health carers is rapidly decreasing. Therefore, there is an urgent need to create new technologies to monitor the physical and mental health of people during their  daily life. In particular, negative mental states like depression and anxiety are big problems in modern societies, usually due to stressful situations during everyday activities including work. This paper presents a machine learning approach for stress detection on people using wearable physiological sensors with the ?final aim of improving their quality of life. The presented technique can monitor the state of the subject continuously and classify it into "stressful" or "non-stressful" situations. Our classification results show that this method is a good starting point towards real-time stress detection.}
    }
  • J. Santos, T. Krajnik, J. P. Fentanes, and T. Duckett, “Lifelong exploration of dynamic environments,” in IEEE International Conference on Robotics and Automation (ICRA), 2015.
    [BibTeX] [Abstract] [EPrints]

    We propose a novel spatio-temporal mobile-robot exploration method for dynamic, human-populated environments. In contrast to other exploration methods that model the environment as being static, our spatio-temporal exploration method creates and maintains a world model that not only represents the environment’s structure, but also its dynamics over time. Consideration of the world dynamics adds an extra, temporal dimension to the explored space and makes the exploration task a never-ending data-gathering process to keep the robot’s environment model up-to-date. Thus, the crucial question is not only where, but also when to observe the explored environment. We address the problem by application of information-theoretic exploration to world representations that model the environment states’ uncertainties as probabilistic functions of time. The predictive ability of the spatio-temporal model allows the exploration method to decide not only where, but also when to make environment observations. To verify the proposed approach, an evaluation of several exploration strategies and spatio-temporal models was carried out using real-world data gathered over several months. The evaluation indicates that through understanding of the environment dynamics, the proposed spatio-temporal exploration method could predict which locations were going to change at a specific time and use this knowledge to guide the robot. Such an ability is crucial for long-term deployment of mobile robots in human-populated spaces that change over time.

    @inproceedings{lirolem17951,
           booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
               month = {May},
               title = {Lifelong exploration of dynamic environments},
              author = {Joao Santos and Tomas Krajnik and Jaime Pulido Fentanes and Tom Duckett},
           publisher = {IEEE},
                year = {2015},
            keywords = {ARRAY(0x7fdc7816b918)},
                 url = {http://eprints.lincoln.ac.uk/17951/},
            abstract = {We propose a novel spatio-temporal mobile-robot exploration method for dynamic, human-populated environments.
    In contrast to other exploration methods that model the environment as being static, our spatio-temporal exploration method creates and maintains a world model that not only represents the environment's structure, but also its dynamics over time. Consideration of the world dynamics adds an extra, temporal dimension to the explored space and makes the exploration task a never-ending data-gathering process to keep the robot's environment model up-to-date. Thus, the crucial question is not only where, but also when to observe the explored environment. We address the problem by application of information-theoretic exploration to world representations that model the environment states' uncertainties as probabilistic functions of time. The predictive ability of the spatio-temporal model allows the exploration method to decide not only where, but also when to make environment observations.
    
    To verify the proposed approach, an evaluation of several exploration strategies and spatio-temporal models was carried out using real-world data gathered over several months. The evaluation indicates that through understanding of the environment dynamics, the proposed spatio-temporal exploration method could predict which locations were going to change at a specific time and use this knowledge to guide the robot. Such an ability is crucial for long-term deployment of mobile robots in human-populated spaces that change over time.}
    }
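
    A toy sketch of the information-gain idea described in the exploration abstracts above (illustrative only; the function names, the Bernoulli state model, and the entropy-only gain measure are simplifying assumptions): each location's state has a time-dependent probability, and the robot prefers the location and time whose predicted state is most uncertain.

    # Illustrative sketch: pick where and when to observe by maximising the entropy
    # of the predicted (time-dependent) state of each location.
    import math

    def entropy(p):
        """Shannon entropy of a Bernoulli state with probability p (in bits)."""
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    def best_observation(predictors, candidate_times):
        """predictors: {location: f(t) -> probability that the state is 'occupied'}."""
        best = None
        for loc, predict in predictors.items():
            for t in candidate_times:
                gain = entropy(predict(t))
                if best is None or gain > best[0]:
                    best = (gain, loc, t)
        return best  # (expected gain, location, time)

    # Usage with two hypothetical locations: a corridor that is busy during the day
    # and a storeroom that almost never changes.
    predictors = {
        "corridor":  lambda t: 0.5 + 0.45 * math.cos(2 * math.pi * t / 86400),
        "storeroom": lambda t: 0.02,
    }
    print(best_observation(predictors, candidate_times=range(0, 86400, 3600)))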
  • D. Wang, S. Yue, J. Xu, X. Hou, and C. Liu, “A saliency-based cascade method for fast traffic sign detection,” in Intelligent Vehicles Symposium, IV 2015, 2015, pp. 180-185.
    [BibTeX] [Abstract] [EPrints]

    We propose a cascade method for fast and accurate traffic sign detection. The main feature of the method is that a mid-level saliency test is used to efficiently and reliably eliminate background windows. Fast feature extraction is adopted in the subsequent stages for rejecting more negatives. Combined with neighbor-scale awareness in the window search, the proposed method runs at 3~5 fps for high resolution (1360×800) images, 2~7 times as fast as most state-of-the-art methods. Compared with them, the proposed method yields competitive performance on prohibitory signs while sacrificing performance moderately on danger and mandatory signs. © 2015 IEEE.

    @inproceedings{lirolem20151,
              volume = {2015-A},
               month = {July},
              author = {Dongdong Wang and Shigang Yue and Jiawei Xu and Xinwen Hou and Cheng-Lin Liu},
                note = {Conference Code:117127},
           booktitle = {Intelligent Vehicles Symposium, IV 2015},
               title = {A saliency-based cascade method for fast traffic sign detection},
           publisher = {Institute of Electrical and Electronics Engineers Inc.},
                year = {2015},
             journal = {IEEE Intelligent Vehicles Symposium, Proceedings},
               pages = {180--185},
            keywords = {ARRAY(0x7fdc78166e30)},
                 url = {http://eprints.lincoln.ac.uk/20151/},
            abstract = {We propose a cascade method for fast and accurate traffic sign detection. The main feature of the method is that mid-level saliency test is used to efficiently and reliably eliminate background windows. Fast feature extraction is adopted in the subsequent stages for rejecting more negatives. Combining with neighbor scales awareness in window search, the proposed method runs at 3{\texttt{\char126}}5 fps for high resolution (1360x800) images, 2{\texttt{\char126}}7 times as fast as most state-of-the-art methods. Compared with them, the proposed method yields competitive performance on prohibitory signs while sacrifices performance moderately on danger and mandatory signs. {\copyright} 2015 IEEE.}
    }
  • A. Wystrach, M. Mangan, and B. Webb, “Optimal cue integration in ants,” Proceedings of the Royal Society B: Biological Sciences, vol. 282, iss. 1816, 2015.
    [BibTeX] [Abstract] [EPrints]

    In situations with redundant or competing sensory information, humans have been shown to perform cue integration, weighting different cues according to their certainty in a quantifiably optimal manner. Ants have been shown to merge the directional information available from their path integration (PI) and visual memory, but as yet it is not clear that they do so in a way that reflects the relative certainty of the cues. In this study, we manipulate the variance of the PI home vector by allowing ants (Cataglyphis velox) to run different distances and testing their directional choice when the PI vector direction is put in competition with visual memory. Ants show progressively stronger weighting of their PI direction as PI length increases. The weighting is quantitatively predicted by modelling the expected directional variance of home vectors of different lengths and assuming optimal cue integration. However, a subsequent experiment suggests ants may not actually compute an internal estimate of the PI certainty, but are using the PI home vector length as a proxy.

    @article{lirolem23589,
              volume = {282},
              number = {1816},
               month = {October},
              author = {Antoine Wystrach and Michael Mangan and Barbara Webb},
               title = {Optimal cue integration in ants},
           publisher = {Royal Society},
             journal = {Proceedings of the Royal Society B: Biological Sciences},
                year = {2015},
            keywords = {ARRAY(0x7fdc78166c20)},
                 url = {http://eprints.lincoln.ac.uk/23589/},
            abstract = {In situations with redundant or competing sensory information, humans have been shown to perform cue integration, weighting different cues according to their certainty in a quantifiably optimal manner. Ants have been shown to merge the directional information available from their path integration (PI) and visual memory, but as yet it is not clear that they do so in a way that reflects the relative certainty of the cues. In this study, we manipulate the variance of the PI home vector by allowing ants (Cataglyphis velox) to run different distances and testing their directional choice when the PI vector direction is put in competition with visual memory. Ants show progressively stronger weighting of their PI direction as PI length increases. The weighting is quantitatively predicted by modelling the expected directional variance of home vectors of different lengths and assuming optimal cue integration. However, a subsequent experiment suggests ants may not actually compute an internal estimate of the PI certainty, but are using the PI home vector length as a proxy.}
    }
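
    The optimal cue integration tested in the ant study above amounts to weighting each directional cue by its reliability. A minimal sketch, assuming inverse-variance weighting of two headings (the variances and directions below are made-up example values):

    # Illustrative optimal (reliability-weighted) integration of two directional cues.
    import math

    def integrate_directions(theta_pi, var_pi, theta_visual, var_visual):
        """Combine two headings (radians) weighted by inverse variance."""
        w_pi, w_vis = 1.0 / var_pi, 1.0 / var_visual
        x = w_pi * math.cos(theta_pi) + w_vis * math.cos(theta_visual)
        y = w_pi * math.sin(theta_pi) + w_vis * math.sin(theta_visual)
        return math.atan2(y, x)

    # Usage: a long home vector (low PI variance) pulls the combined heading towards
    # the PI direction; a short one leaves visual memory dominant.
    for var_pi in (0.05, 1.0):
        heading = integrate_directions(theta_pi=0.0, var_pi=var_pi,
                                       theta_visual=math.pi / 2, var_visual=0.3)
        print(round(math.degrees(heading), 1))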
  • J. Xu and S. Yue, “Building up a bio-inspired visual attention model by integrating top-down shape bias and improved mean shift adaptive segmentation,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 29, iss. 4, 2015.
    [BibTeX] [Abstract] [EPrints]

    The driver-assistance system (DAS) has become necessary in-vehicle equipment due to the large number of road traffic accidents worldwide. An efficient DAS that detects hazardous situations robustly is key to reducing road accidents. The core of a DAS is to identify salient regions, or regions of interest relevant to visually attended objects, in real visual scenes for further processing. In order to achieve this goal, we present a method to locate regions of interest automatically based on a novel adaptive mean shift segmentation algorithm to obtain salient objects. In the proposed mean shift algorithm, we use an adaptive Bayesian bandwidth to find the convergence of all data points by iterations and k-nearest neighborhood queries. Experiments showed that the proposed algorithm is efficient and yields better visual salient regions compared with the ground-truth benchmark. The proposed algorithm consistently outperformed other known visual saliency methods, generating higher precision and better recall rates when challenged with natural scenes collected locally and one of the largest publicly available data sets. The proposed algorithm can also be extended naturally to detect moving vehicles in dynamic scenes once integrated with top-down shape-biased cues, as demonstrated in our experiments. © 2015 World Scientific Publishing Company.

    @article{lirolem20639,
              volume = {29},
              number = {4},
               month = {June},
              author = {Jiawei Xu and Shigang Yue},
               title = {Building up a bio-inspired visual attention model by integrating top-down shape bias and improved mean shift adaptive segmentation},
           publisher = {World Scientific Publishing Co. Pte Ltd},
             journal = {International Journal of Pattern Recognition and Artificial Intelligence},
                year = {2015},
            keywords = {ARRAY(0x7fdc7816b708)},
                 url = {http://eprints.lincoln.ac.uk/20639/},
            abstract = {The driver-assistance system (DAS) becomes quite necessary in-vehicle equipment nowadays due to the large number of road traffic accidents worldwide. An efficient DAS detecting hazardous situations robustly is key to reduce road accidents. The core of a DAS is to identify salient regions or regions of interest relevant to visual attended objects in real visual scenes for further process. In order to achieve this goal, we present a method to locate regions of interest automatically based on a novel adaptive mean shift segmentation algorithm to obtain saliency objects. In the proposed mean shift algorithm, we use adaptive Bayesian bandwidth to find the convergence of all data points by iterations and the k-nearest neighborhood queries. Experiments showed that the proposed algorithm is efficient, and yields better visual salient regions comparing with ground-truth benchmark. The proposed algorithm continuously outperformed other known visual saliency methods, generated higher precision and better recall rates, when challenged with natural scenes collected locally and one of the largest publicly available data sets. The proposed algorithm can also be extended naturally to detect moving vehicles in dynamic scenes once integrated with top-down shape biased cues, as demonstrated in our experiments. {\^A}{\copyright} 2015 World Scientific Publishing Company.}
    }
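
    The abstract above relies on mean shift segmentation with an adaptive Bayesian bandwidth. The sketch below shows only the generic fixed-bandwidth mean shift iteration for reference (the adaptive bandwidth and saliency stages of the paper are not reproduced):

    # Illustrative fixed-bandwidth mean shift: every point moves to the Gaussian-weighted
    # mean of the data until convergence; points that reach the same mode form one region.
    import numpy as np

    def mean_shift(points, bandwidth=1.0, iterations=50, tol=1e-3):
        points = np.asarray(points, dtype=float)
        shifted = points.copy()
        for _ in range(iterations):
            max_move = 0.0
            for i, x in enumerate(shifted):
                d2 = np.sum((points - x) ** 2, axis=1)
                w = np.exp(-d2 / (2.0 * bandwidth ** 2))
                new_x = (w[:, None] * points).sum(axis=0) / w.sum()
                max_move = max(max_move, float(np.linalg.norm(new_x - x)))
                shifted[i] = new_x
            if max_move < tol:
                break
        return shifted

    # Usage: two well-separated 2D blobs collapse onto two modes.
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
    print(np.unique(np.round(mean_shift(data, bandwidth=1.0), 1), axis=0))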
  • Z. Zhang, S. Yue, and G. Zhang, “Fly visual system inspired artificial neural network for collision detection,” Neurocomputing, vol. 153, iss. 4, pp. 221-234, 2015.
    [BibTeX] [Abstract] [EPrints]

    This work investigates a bio-inspired collision detection system based on fly visual neural structures, in which a collision alarm is triggered, together with the relevant time region of collision, if an approaching object on a direct collision course appears in the field of view of a camera or a robot. The artificial system consists of an artificial fly visual neural network model and a collision detection mechanism. The former is a computational model that captures the membrane potentials produced by neurons. The latter takes the outputs of the former as its inputs and executes three detection schemes: (i) identifying when a spike takes place through the membrane potentials and a threshold scheme; (ii) deciding the motion direction of a moving object by the Reichardt detector model; and (iii) sending collision alarms and collision regions. Experimentally, relying upon a series of video image sequences with different scenes, numerical results illustrated that the artificial system, with some striking characteristics, is a potential alternative tool for collision detection.

    @article{lirolem17881,
              volume = {153},
              number = {4},
               month = {April},
              author = {Zhuhong Zhang and Shigang Yue and Guopeng Zhang},
               title = {Fly visual system inspired artificial neural network for collision detection},
           publisher = {Elsevier},
                year = {2015},
             journal = {Neurocomputing},
               pages = {221--234},
            keywords = {ARRAY(0x7fdc7816b9a8)},
                 url = {http://eprints.lincoln.ac.uk/17881/},
            abstract = {This work investigates one bio-inspired collision detection system based on fly visual neural structures, in which collision alarm is triggered if an approaching object in a direct collision course appears in the field of view of a camera or a robot, together with the relevant time region of collision. One such artificial system consists of one artificial fly visual neural network model and one collision detection mechanism. The former one is a computational model to capture membrane potentials produced by neurons. The latter one takes the outputs of the former one as its inputs, and executes three detection schemes: (i) identifying when a spike takes place through the membrane potentials and one threshold scheme; (ii) deciding the motion direction of a moving object by the Reichardt detector model; and (iii) sending collision alarms and collision regions. Experimentally, relying upon a series of video image sequences with different scenes, numerical results illustrated that the artificial system with some striking characteristics is a potentially alternative tool for collision detection.}
    }
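
    One component named in the abstract above, the Reichardt detector, is a classic delay-and-correlate motion detector. A minimal sketch under the usual two-receptor formulation (illustrative, not the paper's implementation):

    # Illustrative Reichardt (delay-and-correlate) motion detector for two adjacent
    # photoreceptor signals: a positive output indicates motion from receptor 1 to 2.
    def reichardt(signal1, signal2, delay=1):
        out = []
        for t in range(delay, min(len(signal1), len(signal2))):
            out.append(signal1[t - delay] * signal2[t] - signal2[t - delay] * signal1[t])
        return out

    # Usage: a pattern moving from receptor 1 to receptor 2 (signal2 lags signal1)
    # yields a mostly positive response.
    s1 = [0, 1, 0, 0, 1, 0, 0, 1, 0]
    s2 = [0, 0, 1, 0, 0, 1, 0, 0, 1]
    print(sum(reichardt(s1, s2)))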

2014

  • F. Arvin, A. E. Turgut, N. Bellotto, and S. Yue, “Comparison of different cue-based swarm aggregation strategies,” in International Conference in Swarm Intelligence, 2014, pp. 1-8.
    [BibTeX] [Abstract] [EPrints]

    In this paper, we compare different strategies for cue-based aggregation with a mobile robot swarm. We used a sound source as the cue in the environment and performed real-robot and simulation-based experiments. We compared the performance of two proposed aggregation algorithms, which we call vector averaging and naïve, with the state-of-the-art cue-based aggregation strategy BEECLUST. We showed that the proposed strategies outperform the BEECLUST method. We also illustrated the feasibility of the method in the presence of noise. The results showed that the vector averaging algorithm is more robust to noise when compared to the naïve method.

    @inproceedings{lirolem14927,
               month = {October},
              author = {Farshad Arvin and Ali Emre Turgut and Nicola Bellotto and Shigang Yue},
                note = {Proceedings, Part I, series volume 8794},
           booktitle = {International Conference in Swarm Intelligence},
               title = {Comparison of different cue-based swarm aggregation strategies},
           publisher = {Springer},
               pages = {1--8},
                year = {2014},
            keywords = {ARRAY(0x7fdc7816bcd8)},
                 url = {http://eprints.lincoln.ac.uk/14927/},
            abstract = {In this paper, we compare different aggregation strategies for cue-based aggregation with a mobile robot swarm. We used a sound source as the cue in the environment and performed real robot and simulation based experiments. We compared the performance of two proposed aggregation algorithms we called as the vector averaging and na{\"i}ve with the state-of-the-art cue-based aggregation strategy BEECLUST. We showed that the proposed strategies outperform BEECLUST method. We also illustrated the feasibility of the method in the presence of noise. The results showed that the vector averaging algorithm is more robust to noise when compared to the na{\"i}ve method.}
    }
  • F. Arvin, J. Murray, L. Shi, C. Zhang, and S. Yue, “Development of an autonomous micro robot for swarm robotics,” in IEEE International Conference on Mechatronics and Automation (ICMA), 2014, pp. 635-640.
    [BibTeX] [Abstract] [EPrints]

    Swarm robotic systems, which are inspired by the social behaviour of animals, especially insects, are becoming a fascinating topic for multi-robot researchers. Simulation software is mostly used for performing research in swarm robotics due to the hardware complexities and cost of robot platforms. However, simulation of large numbers of these swarm robots is extremely complex and often inaccurate. In this paper we present the design of a low-cost, open-platform, autonomous micro robot (Colias) for swarm robotic applications. Colias uses a circular platform with a diameter of 4 cm. Long-range infrared modules with adjustable output power allow the robot to communicate with its direct neighbours. The robot has been tested in individual and swarm scenarios and the observed results demonstrate its feasibility to be used as a micro-sized mobile robot as well as a low-cost platform for robot swarm applications.

    @inproceedings{lirolem14837,
           booktitle = {IEEE International Conference on Mechatronics and Automation (ICMA)},
               month = {August},
               title = {Development of an autonomous micro robot for swarm robotics},
              author = {Farshad Arvin and John Murray and Licheng Shi and Chun Zhang and Shigang Yue},
           publisher = {IEEE},
                year = {2014},
               pages = {635--640},
            keywords = {ARRAY(0x7fdc781a6740)},
                 url = {http://eprints.lincoln.ac.uk/14837/},
            abstract = {Swarm robotic systems which are inspired from social behaviour of animals especially insects are becoming a fascinating topic for multi-robot researchers. Simulation software is mostly used for performing research in swarm robotics due the hardware complexities and cost of robot platforms. However, simulation of large numbers of these swarm robots is extremely complex and often inaccurate. In this paper we present the design of a low-cost, open-platform, autonomous micro robot (Colias) for swarm robotic applications. Colias uses a circular platform with a diameter of 4 cm. Long-range infrared modules with adjustable output power allow the robot to communicate with its direct neighbours. The robot has been tested in individual and swarm scenarios and the observed results demonstrate its feasibility to be used as a micro sized mobile robot as well as a low-cost platform for robot swarm applications.}
    }
  • F. Arvin, J. Murray, C. Zhang, and S. Yue, “Colias: an autonomous micro robot for swarm robotic applications,” International Journal of Advanced Robotic Systems, vol. 11, iss. 113, pp. 1-10, 2014.
    [BibTeX] [Abstract] [EPrints]

    Robotic swarms that take inspiration from nature are becoming a fascinating topic for multi-robot researchers. The aim is to control a large number of simple robots, enabling them to solve common complex tasks. Due to the hardware complexities and cost of robot platforms, current research in swarm robotics is mostly performed by simulation software. Simulation of the large numbers of robots used in swarm robotic applications is extremely complex and often inaccurate due to poor modelling of external conditions. In this paper we present the design of a low-cost, open-platform, autonomous micro robot (Colias) for swarm robotic applications. Colias employs a circular platform with a diameter of 4 cm. It has a maximum speed of 35 cm/s, which allows it to be used in swarm scenarios over large arenas very quickly. Long-range infrared modules with adjustable output power allow the robot to communicate with its direct neighbours over a range of 0.5 cm to 3 m. Colias has been designed as a complete platform, with supporting software development tools, for robotics education and research. It has been tested in individual and swarm scenarios and the observed results demonstrate its feasibility to be used as a micro-sized mobile robot as well as a low-cost platform for robot swarm applications.

    @article{lirolem14585,
              volume = {11},
              number = {113},
               month = {July},
              author = {Farshad Arvin and John Murray and Chun Zhang and Shigang Yue},
               title = {Colias: an autonomous micro robot for swarm robotic applications},
           publisher = {InTech},
                year = {2014},
             journal = {International Journal of Advanced Robotic Systems},
               pages = {1--10},
            keywords = {ARRAY(0x7fdc780ad9b0)},
                 url = {http://eprints.lincoln.ac.uk/14585/},
            abstract = {Robotic swarms that take inspiration from nature are becoming a fascinating topic for multi-robot researchers. The aim is to control a large number of simple robots enables them in order to solve common complex tasks. Due to the hardware complexities and cost of robot platforms, current research in swarm robotics is mostly performed by simulation software. Simulation of large numbers of these robots which are used in swarm robotic applications is extremely complex and often inaccurate due to poor modelling of external conditions. In this paper we present the design of a low-cost, open-platform, autonomous micro robot (Colias) for swarm robotic applications. Colias employs a circular platform with a diameter of 4 cm. It has a maximum speed of 35 cm/s that gives the ability to be used in swarm scenarios very quickly in large arenas. Long-range infrared modules with adjustable output power allow the robot to communicate with its direct neighbours from a range of 0.5 cm to 3 m. Colias has been designed as a complete platform with supporting software development tools for robotics education and research. It has been tested in individual and swarm scenarios and the observed results demonstrate its feasibility to be used as a micro sized mobile robot as well as a low-cost platform for robot swarm applications.}
    }
  • F. Arvin, A. E. Turgut, F. Bazyari, K. B. Arikan, N. Bellotto, and S. Yue, “Cue-based aggregation with a mobile robot swarm: a novel fuzzy-based method,” Adaptive Behavior, vol. 22, iss. 3, pp. 189-206, 2014.
    [BibTeX] [Abstract] [EPrints]

    Aggregation in swarm robotics is referred to as the gathering of spatially distributed robots into a single aggregate. Aggregation can be classified as cue-based or self-organized. In cue-based aggregation, there is a cue in the environment that points to the aggregation area, whereas in self-organized aggregation no cue is present. In this paper, we proposed a novel fuzzy-based method for cue-based aggregation based on the state-of-the-art BEECLUST algorithm. In particular, we proposed three different methods: naïve, that uses a deterministic decision-making mechanism; vector-averaging, using a vectorial summation of all perceived inputs; and fuzzy, that uses a fuzzy logic controller. We used different experiment settings: one-source and two-source environments with static and dynamic conditions to compare all the methods. We observed that the fuzzy method outperformed all the other methods and it is the most robust method against noise.

    @article{lirolem13932,
              volume = {22},
              number = {3},
               month = {June},
              author = {Farshad Arvin and Ali Emre Turgut and Farhad Bazyari and Kutluk Bilge Arikan and Nicola Bellotto and Shigang Yue},
               title = {Cue-based aggregation with a mobile robot swarm: a novel fuzzy-based method},
           publisher = {Sage for International Society for Adaptive Behavior (ISAB)},
                year = {2014},
             journal = {Adaptive Behavior},
               pages = {189--206},
            keywords = {ARRAY(0x7fdc781a34a0)},
                 url = {http://eprints.lincoln.ac.uk/13932/},
            abstract = {Aggregation in swarm robotics is referred to as the gathering of spatially distributed robots into a single aggregate. Aggregation can be classified as cue-based or self-organized. In cue-based aggregation, there is a cue in the environment that points to the aggregation area, whereas in self-organized aggregation no cue is present. In this paper, we proposed a novel fuzzy-based method for cue-based aggregation based on the state-of-the-art BEECLUST algorithm. In particular, we proposed three different methods: na{\"i}ve, that uses a deterministic decision-making mechanism; vector-averaging, using a vectorial summation of all perceived inputs; and fuzzy, that uses a fuzzy logic controller. We used different experiment settings: one-source and two-source environments with static and dynamic conditions to compare all the methods. We observed that the fuzzy method outperformed all the other methods and it is the most robust method against noise.}
    }
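
    Of the three strategies compared above, vector averaging is the simplest to sketch: sum unit vectors towards each perceived cue, weighted by its intensity, and steer along the resultant. The weighting and names below are illustrative assumptions, not the authors' controller:

    # Illustrative vector-averaging heading for cue-based aggregation: sum vectors
    # towards each perceived cue, weighted by its intensity, and head along the result.
    import math

    def vector_average_heading(robot_xy, cues):
        """cues: list of ((x, y), intensity). Returns a heading in radians, or None."""
        sx = sy = 0.0
        for (cx, cy), intensity in cues:
            dx, dy = cx - robot_xy[0], cy - robot_xy[1]
            dist = math.hypot(dx, dy)
            if dist > 0.0:
                sx += intensity * dx / dist
                sy += intensity * dy / dist
        if sx == 0.0 and sy == 0.0:
            return None   # no usable cue: fall back to a random walk
        return math.atan2(sy, sx)

    # Usage: two sound sources; the louder one dominates the averaged heading.
    print(vector_average_heading((0.0, 0.0), [((1.0, 0.0), 0.9), ((0.0, 1.0), 0.3)]))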
  • A. Attar, X. Xie, C. Zhang, Z. Wang, and S. Yue, “Wireless Micro-Ball endoscopic image enhancement using histogram information,” in Conference proceedings of the 2014 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Institute of Electrical and Electronics Engineers Inc., 2014, pp. 3337-3340.
    [BibTeX] [Abstract] [EPrints]

    Wireless endoscopy is an innovative method that has become widely used for gastrointestinal tract examination in the last decade. The Wireless Micro-Ball endoscopy system, with multiple image sensors, is the newest proposed method and can produce a full-view image of the gastrointestinal tract. However, the quality of images from this new wireless endoscopy system is still not satisfactory: it is hard for doctors and specialists to easily examine and interpret the captured images, and the image features are not distinct enough to be used for further processing. To enhance these low-contrast endoscopic images, a new image enhancement method based on the endoscopic image features and color distribution is proposed in this work. The enhancement method is performed in three main steps, namely color space transformation, edge-preserving mask formation, and histogram information correction. The luminance component of the CIE Lab, YCbCr, and HSV color spaces is enhanced in this method, and the two other components are then added to form an enhanced color image. The experimental results clearly show the robustness of the method. © 2014 IEEE.

    @incollection{lirolem17582,
               month = {August},
              author = {Abdolrahman Attar and Xiang Xie and Chun Zhang and Zhihua Wang and Shigang Yue},
                note = {Conference of 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2014 ; Conference Date: 26 - 30 August 2014; Chicago, USA  Conference Code:109045},
           booktitle = {Conference proceedings of the 2014 Annual International Conference of the IEEE Engineering in Medicine and Biology Society},
               title = {Wireless Micro-Ball endoscopic image enhancement using histogram information},
           publisher = {Institute of Electrical and Electronics Engineers Inc.},
                year = {2014},
             journal = {2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2014},
               pages = {3337--3340},
                 url = {http://eprints.lincoln.ac.uk/17582/},
            abstract = {Wireless endoscopy systems is a new innovative method widely used for gastrointestinal tract examination in recent decade. Wireless Micro-Ball endoscopy system with multiple image sensors is the newest proposed method which can make a full view image of the gastrointestinal tract. But still the quality of images from this new wireless endoscopy system is not satisfactory. It's hard for doctors and specialist to easily examine and interpret the captured images. The image features also are not distinct enough to be used for further processing. So as to enhance these low-contrast endoscopic images a new image enhancement method based on the endoscopic images features and color distribution is proposed in this work. The enhancement method is performed on three main steps namely color space transformation, edge preserving mask formation, and histogram information correction. The luminance component of CIE Lab, YCbCr, and HSV color space is enhanced in this method and then two other components added finally to form an enhanced color image. The experimental result clearly show the robustness of the method. {\copyright} 2014 IEEE.}
    }
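
    The abstract above outlines a three-step enhancement pipeline (color space transformation, edge-preserving mask formation, and histogram information correction) applied to the luminance channel. The snippet below is a minimal sketch of the general idea only: it equalises the luminance histogram in CIE Lab space with OpenCV and recombines the original chroma channels. It is not the authors' method and omits the edge-preserving mask.

        import cv2
        import numpy as np

        def enhance_luminance(bgr_image):
            """Equalise the luminance histogram in CIE Lab space and recombine the
            chroma channels (a simplified stand-in for the paper's pipeline)."""
            lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
            l, a, b = cv2.split(lab)
            l_eq = cv2.equalizeHist(l)            # histogram-based contrast correction
            return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

        # Synthetic low-contrast frame standing in for an endoscopic image
        frame = np.full((240, 320, 3), 110, dtype=np.uint8)
        frame[60:180, 80:240] = 140
        print(enhance_luminance(frame).shape)
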
  • M. Barnes, “Computer vision based detection and identification of potato blemishes,” PhD Thesis, 2014.
    [BibTeX] [Abstract] [EPrints]

    .

    @phdthesis{lirolem14568,
               month = {July},
               title = {Computer vision based detection and identification of potato blemishes},
              school = {University of Lincoln},
              author = {Michael Barnes},
                year = {2014},
                 url = {http://eprints.lincoln.ac.uk/14568/},
            abstract = {.}
    }
  • A. Cheung, M. Collett, T. S. Collett, A. Dewar, F. Dyer, P. Graham, M. Mangan, A. Narendra, A. Philippides, W. Stürzl, B. Webb, A. Wystrach, and J. Zeil, “Still no convincing evidence for cognitive map use by honeybees,” Proceedings of the National Academy of Sciences, vol. 111, iss. 42, p. E4396–E4397, 2014.
    [BibTeX] [Abstract] [EPrints]

    Cheeseman et al. (1) claim that an ability of honey bees to travel home through a landscape with conflicting information from a celestial compass proves the bees’ use of a cognitive map. Their claim involves a curious assumption about the visual information that can be extracted from the terrain: that there is sufficient information for a bee to identify where it is, but insufficient to guide its path without resorting to a cognitive map. We contend that the authors’ claims are unfounded.

    @article{lirolem23584,
              volume = {111},
              number = {42},
               month = {October},
              author = {Allen Cheung and Matthew Collett and Thomas S. Collett and Alex Dewar and Fred Dyer and Paul Graham and Michael Mangan and Ajay Narendra and Andrew Philippides and Wolfgang St{\"u}rzl and Barbara Webb and Antoine Wystrach and Jochen Zeil},
               title = {Still no convincing evidence for cognitive map use by honeybees},
           publisher = {National Academy of Sciences},
                year = {2014},
             journal = {Proceedings of the National Academy of Sciences},
               pages = {E4396--E4397},
                 url = {http://eprints.lincoln.ac.uk/23584/},
            abstract = {Cheeseman et al. (1) claim that an ability of honey bees to travel home through a landscape with conflicting information from a celestial compass proves the bees' use of a cognitive map. Their claim involves a curious assumption about the visual information that can be extracted from the terrain: that there is sufficient information for a bee to identify where it is, but insufficient to guide its path without resorting to a cognitive map. We contend that the authors' claims are unfounded.}
    }
  • H. Cuayahuitl, I. Kruijff-Korbayová, and N. Dethlefs, “Nonstrict hierarchical reinforcement learning for interactive systems and robots,” ACM Transactions on Interactive Intelligent Systems (TiiS), vol. 4, iss. 3, p. 15, 2014.
    [BibTeX] [Abstract] [EPrints]

    Conversational systems and robots that use reinforcement learning for policy optimization in large domains often face the problem of limited scalability. This problem has been addressed either by using function approximation techniques that estimate the approximate true value function of a policy or by using a hierarchical decomposition of a learning task into subtasks. We present a novel approach for dialogue policy optimization that combines the benefits of both hierarchical control and function approximation and that allows flexible transitions between dialogue subtasks to give human users more control over the dialogue. To this end, each reinforcement learning agent in the hierarchy is extended with a subtask transition function and a dynamic state space to allow flexible switching between subdialogues. In addition, the subtask policies are represented with linear function approximation in order to generalize the decision making to situations unseen in training. Our proposed approach is evaluated in an interactive conversational robot that learns to play quiz games. Experimental results, using simulation and real users, provide evidence that our proposed approach can lead to more flexible (natural) interactions than strict hierarchical control and that it is preferred by human users.

    @article{lirolem22211,
              volume = {4},
              number = {3},
               month = {October},
              author = {Heriberto Cuayahuitl and Ivana Kruijff-Korbayov{\'a} and Nina Dethlefs},
               title = {Nonstrict hierarchical reinforcement learning for interactive systems and robots},
           publisher = {Association for Computing Machinery (ACM)},
                year = {2014},
             journal = {ACM Transactions on Interactive Intelligent Systems (TiiS)},
               pages = {15},
                 url = {http://eprints.lincoln.ac.uk/22211/},
            abstract = {Conversational systems and robots that use reinforcement learning for policy optimization in large domains often face the problem of limited scalability. This problem has been addressed either by using function approximation techniques that estimate the approximate true value function of a policy or by using a hierarchical decomposition of a learning task into subtasks. We present a novel approach for dialogue policy optimization that combines the benefits of both hierarchical control and function approximation and that allows flexible transitions between dialogue subtasks to give human users more control over the dialogue. To this end, each reinforcement learning agent in the hierarchy is extended with a subtask transition function and a dynamic state space to allow flexible switching between subdialogues. In addition, the subtask policies are represented with linear function approximation in order to generalize the decision making to situations unseen in training. Our proposed approach is evaluated in an interactive conversational robot that learns to play quiz games. Experimental results, using simulation and real users, provide evidence that our proposed approach can lead to more flexible (natural) interactions than strict hierarchical control and that it is preferred by human users.}
    }
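
    As a rough illustration of the "nonstrict" hierarchy described above, the sketch below keeps a separate tabular Q-function per subtask and lets a transition function hand control between subtasks in the middle of an interaction, instead of only when the parent task terminates. Subtask names, states, actions and rewards are hypothetical and chosen only to show the control flow; the paper itself uses linear function approximation rather than tables.

        import random
        from collections import defaultdict

        # Toy "nonstrict" hierarchy: each subtask owns a Q-table, and a transition
        # function may hand control to another subtask at any state.
        ACTIONS = {"quiz": ["ask", "confirm", "end"], "chitchat": ["smalltalk", "end"]}
        Q = {task: defaultdict(float) for task in ACTIONS}

        def transition(task, state):
            # Nonstrict switching: the user can pull the dialogue into another subtask.
            if task == "quiz" and state == "user_wants_chitchat":
                return "chitchat"
            return task

        def choose(task, state, eps=0.1):
            acts = ACTIONS[task]
            if random.random() < eps:
                return random.choice(acts)
            return max(acts, key=lambda a: Q[task][(state, a)])

        def q_update(task, state, action, reward, next_state, alpha=0.1, gamma=0.95):
            best_next = max(Q[task][(next_state, a)] for a in ACTIONS[task])
            td_error = reward + gamma * best_next - Q[task][(state, action)]
            Q[task][(state, action)] += alpha * td_error

        # One hand-crafted step: the quiz agent acts, then control switches to chitchat.
        task, state = "quiz", "user_wants_chitchat"
        action = choose(task, state)
        q_update(task, state, action, reward=0.0, next_state=state)
        print(action, "->", transition(task, state))
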
  • H. Cuayahuitl, L. Frommberger, N. Dethlefs, A. Raux, M. Marge, and H. Zender, “Introduction to the special issue on Machine learning for multiple modalities in interactive systems and robots,” ACM Transactions on Interactive Intelligent Systems (TiiS), vol. 4, iss. 3, p. 12e, 2014.
    [BibTeX] [Abstract] [EPrints]

    This special issue highlights research articles that apply machine learning to robots and other systems that interact with users through more than one modality, such as speech, gestures, and vision. For example, a robot may coordinate its speech with its actions, taking into account (audio-)visual feedback during their execution. Machine learning provides interactive systems with opportunities to improve performance not only of individual components but also of the system as a whole. However, machine learning methods that encompass multiple modalities of an interactive system are still relatively hard to find. The articles in this special issue represent examples that contribute to filling this gap.

    @article{lirolem22212,
              volume = {4},
              number = {3},
               month = {October},
              author = {Heriberto Cuayahuitl and Lutz Frommberger and Nina Dethlefs and Antoine Raux and Mathew Marge and Hendrik Zender},
               title = {Introduction to the special issue on Machine learning for multiple modalities in interactive systems and robots},
           publisher = {Association for Computing Machinery (ACM)},
                year = {2014},
             journal = {ACM Transactions on Interactive Intelligent Systems (TiiS)},
               pages = {12e},
                 url = {http://eprints.lincoln.ac.uk/22212/},
            abstract = {This special issue highlights research articles that apply machine learning to robots and other systems that interact with users through more than one modality, such as speech, gestures, and vision. For example, a robot may coordinate its speech with its actions, taking into account (audio-)visual feedback during their execution. Machine learning provides interactive systems with opportunities to improve performance not only of individual components but also of the system as a whole. However, machine learning methods that encompass multiple modalities of an interactive system are still relatively hard to find. The articles in this special issue represent examples that contribute to filling this gap.}
    }
  • N. Dethlefs and H. Cuayahuitl, “Hierarchical reinforcement learning for situated language generation,” Natural Language Engineering, vol. 21, iss. 3, pp. 391-435, 2014.
    [BibTeX] [Abstract] [EPrints]

    Natural Language Generation systems in interactive settings often face a multitude of choices, given that the communicative effect of each utterance they generate depends crucially on the interplay between its physical circumstances, addressee and interaction history. This is particularly true in interactive and situated settings. In this paper we present a novel approach for situated Natural Language Generation in dialogue that is based on hierarchical reinforcement learning and learns the best utterance for a context by optimisation through trial and error. The model is trained from human-human corpus data and learns particularly to balance the trade-off between efficiency and detail in giving instructions: the user needs to be given sufficient information to execute their task, but without exceeding their cognitive load. We present results from simulation and a task-based human evaluation study comparing two different versions of hierarchical reinforcement learning: One operates using a hierarchy of policies with a large state space and local knowledge, and the other additionally shares knowledge across generation subtasks to enhance performance. Results show that sharing knowledge across subtasks achieves better performance than learning in isolation, leading to smoother and more successful interactions that are better perceived by human users.

    @article{lirolem22213,
              volume = {21},
              number = {3},
               month = {May},
              author = {Nina Dethlefs and Heriberto Cuayahuitl},
               title = {Hierarchical reinforcement learning for situated language generation},
           publisher = {Cambridge University Press},
                year = {2014},
             journal = {Natural Language Engineering},
               pages = {391--435},
                 url = {http://eprints.lincoln.ac.uk/22213/},
            abstract = {Natural Language Generation systems in interactive settings often face a multitude of choices, given that the communicative effect of each utterance they generate depends crucially on the interplay between its physical circumstances, addressee and interaction history. This is particularly true in interactive and situated settings. In this paper we present a novel approach for situated Natural Language Generation in dialogue that is based on hierarchical reinforcement learning and learns the best utterance for a context by optimisation through trial and error. The model is trained from human-human corpus data and learns particularly to balance the trade-off between efficiency and detail in giving instructions: the user needs to be given sufficient information to execute their task, but without exceeding their cognitive load. We present results from simulation and a task-based human evaluation study comparing two different versions of hierarchical reinforcement learning: One operates using a hierarchy of policies with a large state space and local knowledge, and the other additionally shares knowledge across generation subtasks to enhance performance. Results show that sharing knowledge across subtasks achieves better performance than learning in isolation, leading to smoother and more successful interactions that are better perceived by human users.}
    }
  • C. Dondrup, M. Hanheide, and N. Bellotto, “A probabilistic model of human-robot spatial interaction using a qualitative trajectory calculus,” in AAAI Spring Symposium: "Qualitative Representations for Robots", 2014.
    [BibTeX] [Abstract] [EPrints]

    In this paper we propose a probabilistic model for Human-Robot Spatial Interaction (HRSI) using a Qualitative Trajectory Calculus (QTC). In particular, we will build on previous work representing HRSI as a Markov chain of QTC states and evolve this to an approach using a Hidden Markov Model representation. Our model accounts for the invalidity of certain transitions within the QTC to reduce the complexity of the probabilistic model and to ensure state sequences in accordance to this representational framework. We show the appropriateness of our approach by using the probabilistic model to encode different HRSI behaviours observed in a human-robot interaction study and show how the models can be used to classify these behaviours reliably. Copyright © 2014, Association for the Advancement of Artificial Intelligence. All rights reserved.

    @inproceedings{lirolem13523,
           booktitle = {AAAI Spring Symposium: "Qualitative Representations for Robots"},
               month = {March},
               title = {A probabilistic model of human-robot spatial interaction using a qualitative trajectory calculus},
              author = {Christian Dondrup and Marc Hanheide and Nicola Bellotto},
           publisher = {AAAI / AI Access Foundation},
                year = {2014},
                 url = {http://eprints.lincoln.ac.uk/13523/},
            abstract = {In this paper we propose a probabilistic model for Human-Robot Spatial Interaction (HRSI) using a Qualitative Trajectory Calculus (QTC). In particular, we will build on previous work representing HRSI as a Markov chain of QTC states and evolve this to an approach using a Hidden Markov Model representation. Our model accounts for the invalidity of certain transitions within the QTC to reduce the complexity of the probabilistic model and to ensure state sequences in accordance to this representational framework. We show the appropriateness of our approach by using the probabilistic model to encode different HRSI behaviours observed in a human-robot interaction study and show how the models can be used to classify these behaviours reliably. Copyright {\copyright} 2014, Association for the Advancement of Artificial Intelligence. All rights reserved.}
    }
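
    For readers unfamiliar with the calculus, the sketch below shows a discrete-time approximation of the first two QTC symbols, i.e. whether each agent is moving towards, away from, or holding its distance to the other; sequences of such symbols are the states the Hidden Markov Model above is defined over. Restricting the sketch to the two distance symbols and the choice of threshold are simplifying assumptions.

        import math

        def _sign(x, eps=1e-3):
            return 0 if abs(x) < eps else (1 if x > 0 else -1)

        def qtc_b_distance_symbols(k_t, k_t1, l_t, l_t1, eps=1e-3):
            """Discrete-time approximation of the two QTC distance symbols:
            -1 = moving towards the other agent, 0 = stable, +1 = moving away."""
            d = math.dist(k_t, l_t)
            s_k = _sign(math.dist(k_t1, l_t) - d, eps)   # agent k with respect to l
            s_l = _sign(math.dist(l_t1, k_t) - d, eps)   # agent l with respect to k
            return (s_k, s_l)

        # Example: a human (k) walks towards a stationary robot (l)
        print(qtc_b_distance_symbols((0, 0), (0.2, 0), (2, 0), (2, 0)))   # -> (-1, 0)
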
  • C. Dondrup, N. Bellotto, and M. Hanheide, “Social distance augmented qualitative trajectory calculus for human-robot spatial interaction,” in Robot and Human Interactive Communication, 2014 RO-MAN, 2014, pp. 519-524.
    [BibTeX] [Abstract] [EPrints]

    In this paper we propose to augment a well-established Qualitative Trajectory Calculus (QTC) by incorporating social distances into the model to facilitate a richer and more powerful representation of Human-Robot Spatial Interaction (HRSI). By combining two variants of QTC that implement different resolutions and switching between them based on distance thresholds, we show that we are able to both reduce the complexity of the representation and at the same time enrich QTC with one of the core HRSI concepts: proxemics. Building on this novel integrated QTC model, we propose to represent the joint spatial behaviour of a human and a robot employing a probabilistic representation based on Hidden Markov Models. We show the appropriateness of our approach by encoding different HRSI behaviours observed in a human-robot interaction study and show how the models can be used to represent and classify these behaviours using social distance-augmented QTC.

    @inproceedings{lirolem15832,
           booktitle = {Robot and Human Interactive Communication, 2014 RO-MAN},
               month = {October},
               title = {Social distance augmented qualitative trajectory calculus for human-robot spatial interaction},
              author = {Christian Dondrup and Nicola Bellotto and Marc Hanheide},
           publisher = {IEEE},
                year = {2014},
               pages = {519--524},
                 url = {http://eprints.lincoln.ac.uk/15832/},
            abstract = {In this paper we propose to augment a wellestablished Qualitative Trajectory Calculus (QTC) by incorporating social distances into the model to facilitate a richer and more powerful representation of Human-Robot Spatial Interaction (HRSI). By combining two variants of QTC that implement different resolutions and switching between them based on distance thresholds we show that we are able to both reduce the complexity of the representation and at the same time enrich QTC with one of the core HRSI concepts: proxemics. Building on this novel integrated QTC model, we propose to represent the joint spatial behaviour of a human and a robot employing a probabilistic representation based on Hidden Markov Models. We show the appropriateness of our approach by encoding different HRSI behaviours observed in a human-robot interaction study and show how the models can be used to represent and classify these behaviours using  social distance-augmented QTC.}
    }
  • C. Dondrup, C. Lichtenthaeler, and M. Hanheide, “Hesitation signals in human-robot head-on encounters: a pilot study,” in 9th ACM/IEEE International Conference on Human Robot Interaction, 2014, pp. 154-155.
    [BibTeX] [Abstract] [EPrints]

    The motivation for this research stems from the future vision of being able to buy a mobile service robot for your own household, unpack it, switch it on, and have it behave in an intelligent way; but of course it also has to adapt to your personal preferences over time. My work focuses on the spatial aspect of the robot's behaviours, which means that when it is moving in a confined, shared space with a human it will also take the communicative character of these movements into account. This adaptation to the user's preferences should come from experience which the robot gathers throughout several days or months of interaction, and not from a programmer hard-coding certain behaviours.

    @inproceedings{lirolem13570,
           booktitle = {9th ACM/IEEE International Conference on Human Robot Interaction},
               month = {March},
               title = {Hesitation signals in human-robot head-on encounters: a pilot study},
              author = {Christian Dondrup and Christina Lichtenthaeler and Marc Hanheide},
           publisher = {IEEE},
                year = {2014},
               pages = {154--155},
                 url = {http://eprints.lincoln.ac.uk/13570/},
            abstract = {The motivation for this research stems from the future vision of being able to buy a mobile service robot for your own household, unpack it, switch it on, and have it behave in an intelligent way; but of course it also has to adapt to your personal preferences over time. My work is focusing on the spatial aspect of the robot's behaviours, which means when it is moving in a confined, shared space with a human it will also take the communicative character of these movements into account. This adaptation to the users preferences should come from experience which the robot gathers throughout several days or months of interaction and not from a programmer hard-coding certain behaviours}
    }
  • T. Duckett and T. Krajnik, “A frequency-based approach to long-term robotic mapping,” in ICRA 2014 Workshop on Long Term Autonomy, 2014.
    [BibTeX] [Abstract] [EPrints]

    While mapping of static environments has been widely studied, long-term mapping in non-stationary environments is still an open problem. In this talk, we present a novel approach for long-term representation of populated environments, where many of the observed changes are caused by humans performing their daily activities. We propose to model the environment’s dynamics by its frequency spectrum, as a combination of harmonic functions that correspond to periodic processes influencing the environment. Such a representation not only allows representation of environment dynamics over arbitrary timescales with constant memory requirements, but also prediction of future environment states. The proposed approach can be applied to many of the state-of-the-art environment models. In particular, we show that occupancy grids, topological or landmark maps can be easily extended to represent dynamic environments. We present experiments using data collected by a mobile robot patrolling an indoor environment over a period of one month, where frequency-enhanced models were compared to their static counterparts in four scenarios: i) 3D map building, ii) environment state prediction, iii) topological localisation and iv) anomaly detection, in order to verify the model’s ability to detect unusual events. In all these cases, the frequency-enhanced models outperformed their static counterparts.

    @inproceedings{lirolem14422,
           booktitle = {ICRA 2014 Workshop on Long Term Autonomy},
               month = {June},
               title = {A frequency-based approach to long-term robotic mapping},
              author = {Tom Duckett and Tomas Krajnik},
                year = {2014},
                 url = {http://eprints.lincoln.ac.uk/14422/},
            abstract = {While mapping of static environments has been widely studied, long-term mapping in non-stationary environments is still an open problem. In this talk, we present a novel approach for long-term representation of populated environments, where many of the observed changes are caused by humans performing their daily activities. We propose to model the environment's dynamics by its frequency spectrum, as a combination of harmonic functions that correspond to periodic processes influencing the environment. Such a representation not only allows representation of environment dynamics over arbitrary timescales with constant memory requirements, but also prediction of future environment states. The proposed approach can be applied to many of the state-of-the-art environment models. In particular, we show that occupancy grids, topological or landmark maps can be easily extended to represent dynamic environments. We present experiments using data collected by a mobile robot patrolling an indoor environment over a period of one month, where frequency-enhanced models were compared to their static counterparts in four scenarios: i) 3D map building, ii) environment state prediction, iii) topological localisation and iv) anomaly detection, in order to verify the model's ability to detect unusual events. In all these cases, the frequency-enhanced models outperformed their static counterparts.}
    }
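
    The core idea above, modelling the environment's dynamics by its frequency spectrum, can be illustrated in a few lines: fit the mean plus the strongest frequency components of a binary occupancy signal, then evaluate those harmonics at an arbitrary future time to predict the state. This is an illustrative re-implementation of the concept, not the authors' code; the doorway example and all parameters are invented.

        import numpy as np

        def fit_spectral_model(occupancy, dt, n_components=2):
            """Keep the mean plus the n strongest non-zero frequency components
            of a regularly sampled binary occupancy signal."""
            n = len(occupancy)
            spectrum = np.fft.rfft(occupancy) / n
            freqs = np.fft.rfftfreq(n, d=dt)
            order = np.argsort(np.abs(spectrum[1:]))[::-1][:n_components] + 1
            return spectrum[0].real, [(freqs[i], spectrum[i]) for i in order]

        def predict(mean, components, t):
            """Reconstruct the occupancy probability at an arbitrary time t (seconds)."""
            p = mean + sum(2.0 * (c * np.exp(2j * np.pi * f * t)).real for f, c in components)
            return float(np.clip(p, 0.0, 1.0))

        # Example: a doorway that tends to be occupied during a daily 'busy' period
        dt = 600.0                                   # one observation every 10 minutes
        t = np.arange(0, 7 * 24 * 3600, dt)          # one week of observations
        occ = ((t % 86400) > 8 * 3600) & ((t % 86400) < 18 * 3600)
        mean, comps = fit_spectral_model(occ.astype(float), dt)
        print(predict(mean, comps, t=8.5 * 24 * 3600))   # forecast a day beyond the data
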
  • C. Hu, F. Arvin, and S. Yue, “Development of a bio-inspired vision system for mobile micro-robots,” in IEEE International Conferences on Development and Learning and Epigenetic Robotics (ICDL-Epirob), 2014, pp. 81-86.
    [BibTeX] [Abstract] [EPrints]

    In this paper, we present a new bio-inspired vision system for mobile micro-robots. The processing method takes inspiration from the vision of locusts in detecting fast approaching objects. Research suggests that locusts use a wide-field visual neuron, called the lobula giant movement detector (LGMD), to respond to imminent collisions. We applied the locusts' vision mechanism to the motion control of a mobile robot. The selected image processing method is implemented on a purpose-built extension module using a low-cost, fast ARM processor. The vision module is placed on top of a micro-robot to control its trajectory and to avoid obstacles. The results of several experiments demonstrate that the developed extension module and the bio-inspired vision system are feasible to employ as a vision module for obstacle avoidance and motion control.

    @inproceedings{lirolem16334,
           booktitle = {IEEE International Conferences on Development and Learning and Epigenetic Robotics (ICDL-Epirob)},
               month = {October},
               title = {Development of a bio-inspired vision system for mobile micro-robots},
              author = {Cheng Hu and Farshad Arvin and Shigang Yue},
           publisher = {IEEE},
                year = {2014},
               pages = {81--86},
                 url = {http://eprints.lincoln.ac.uk/16334/},
            abstract = {In this paper, we present a new bio-inspired vision system for mobile micro-robots. The processing method takes inspiration from vision of locusts in detecting the fast approaching objects. Research suggested that locusts use wide field visual neuron called the lobula giant movement detector to respond to imminent collisions. We employed the locusts' vision mechanism to motion control of a mobile robot. The selected image processing method is implemented on a developed extension module using a low-cost and fast ARM processor. The vision module is placed on top of a micro-robot to control its trajectory and to avoid obstacles. The observed results from several performed experiments demonstrated that the developed extension module and the inspired vision system are feasible to employ as a vision module for obstacle avoidance and motion control.}
    }
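
    A minimal numerical sketch of an LGMD-style detector like the one described above: excitation comes from frame differencing, lateral inhibition from spreading the previous excitation to neighbouring cells, and a "spike" is emitted when the summed membrane potential crosses a threshold. The weights, filter size and threshold are illustrative assumptions, not the values used on the robot's ARM module.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lgmd_step(prev_frame, frame, prev_excitation, threshold=0.15):
            """One step of a simplified LGMD-style collision detector."""
            excitation = np.abs(frame.astype(float) - prev_frame.astype(float)) / 255.0
            inhibition = uniform_filter(prev_excitation, size=3)   # spread to neighbours
            summation = np.clip(excitation - 0.6 * inhibition, 0.0, None)
            membrane = summation.mean()                            # membrane potential
            return excitation, membrane, membrane > threshold      # spike = likely collision

        # Example: an object expanding in the field of view drives the potential up
        frames = []
        for r in range(5, 30, 5):
            f = np.zeros((64, 64), dtype=np.uint8)
            f[32 - r:32 + r, 32 - r:32 + r] = 255
            frames.append(f)

        excitation = np.zeros((64, 64))
        for prev, cur in zip(frames, frames[1:]):
            excitation, potential, spike = lgmd_step(prev, cur, excitation)
            print(round(potential, 3), spike)
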
  • K. Iliopoulos, N. Bellotto, and N. Mavridis, “From sequence to trajectory and vice versa: solving the inverse QTC problem and coping with real-world trajectories,” in AAAI Spring Symposium: "Qualitative Representations for Robots", 2014.
    [BibTeX] [Abstract] [EPrints]

    Spatial interactions between agents carry information of high value to human observers, as exemplified by the high-level interpretations that humans make when watching the Heider and Simmel movie, or other such videos which just contain motions of simple objects, such as points, lines and triangles. However, not all the information contained in a pair of continuous trajectories is important; and thus the need for qualitative descriptions of interaction trajectories arises. Towards that purpose, Qualitative Trajectory Calculus (QTC) has been proposed in (Van de Weghe, 2004). However, the original definition of QTC handles uncorrupted continuous-time trajectories, while real-world signals are noisy and sampled in discrete-time. Also, although QTC presents a method for transforming trajectories to qualitative descriptions, the inverse problem has not yet been studied. Thus, in this paper, after discussing several aspects of the transition from ideal QTC to discrete-time noisy QTC, we introduce a novel algorithm for solving the QTC inverse problem; i.e. transforming qualitative descriptions to archetypal trajectories that satisfy them. Both of these problems are particularly important for the successful application of qualitative trajectory calculus to Human-Robot Interaction.

    @inproceedings{lirolem13519,
           booktitle = {AAAI Spring Symposium: "Qualitative Representations for Robots"},
               month = {March},
               title = {From sequence to trajectory and vice versa: solving the inverse QTC problem and coping with real-world trajectories},
              author = {Konstantinos Iliopoulos and Nicola Bellotto and Nikolaos Mavridis},
           publisher = {AAAI},
                year = {2014},
                 url = {http://eprints.lincoln.ac.uk/13519/},
            abstract = {Spatial interactions between agents carry information of high value to human observers, as exemplified by the high-level interpretations that humans make when watching the Heider and Simmel movie, or other such videos which just contain motions of simple objects, such as points, lines and triangles. However, not all the information contained in a pair of continuous trajectories is important; and thus the need for qualitative descriptions of interaction trajectories arises. Towards that purpose, Qualitative Trajectory Calculus (QTC) has been proposed in (Van de Weghe, 2004). However, the original definition of QTC handles uncorrupted continuous-time trajectories, while real-world signals are noisy and sampled in discrete-time. Also, although QTC presents a method for transforming trajectories to qualitative descriptions, the inverse problem has not yet been studied. Thus, in this paper, after discussing several aspects of the transition from ideal QTC to discrete-time noisy QTC, we introduce a novel algorithm for solving the QTC inverse problem; i.e. transforming qualitative descriptions to archetypal trajectories that satisfy them. Both of these problems are particularly important for the successful application of qualitative trajectory calculus to Human-Robot Interaction.}
    }
  • T. Krajnik, J. Santos, B. Seemann, and T. Duckett, “FROctomap: an efficient spatio-temporal environment representation,” in Advances in Autonomous Robotics Systems, M. Mistry, A. Leonardis, and M. Witkowski, Eds., Springer International Publishing, 2014, vol. 8717, pp. 281-282.
    [BibTeX] [Abstract] [EPrints]

    We present a novel software tool intended for mobile robot mapping in long-term scenarios. The method allows for efficient volumetric representation of dynamic three-dimensional environments over long periods of time. It is based on a combination of a well-established 3D mapping framework called Octomaps and an idea to model environment dynamics by its frequency spectrum. The proposed method allows not only for efficient representation, but also reliable prediction of the future states of dynamic three-dimensional environments. Our spatio-temporal mapping framework is available as an open-source C++ library and a ROS module which allows its easy integration in robotics projects.

    @incollection{lirolem14895,
              volume = {8717},
               month = {September},
              author = {Tomas Krajnik and Joao Santos and Bianca Seemann and Tom Duckett},
              series = {Lecture Notes in Computer Science},
           booktitle = {Advances in Autonomous Robotics Systems},
              editor = {Michael Mistry and Ale Leonardis and Mark Witkowski},
               title = {FROctomap: an efficient spatio-temporal environment representation},
           publisher = {Springer International Publishing},
                year = {2014},
               pages = {281--282},
                 url = {http://eprints.lincoln.ac.uk/14895/},
            abstract = {We present a novel software tool intended for mobile robot mapping in long-term scenarios. The method allows for efficient volumetric representation of dynamic three-dimensional environments over long periods of time. It is based on a combination of a well-established 3D mapping framework called Octomaps and an idea to model environment dynamics by its frequency spectrum. The proposed method allows not only for efficient representation, but also reliable prediction of the future states of dynamic three-dimensional environments. Our spatio-temporal mapping framework is available as an open-source C++ library and a ROS module which allows its easy integration in robotics projects.}
    }
  • T. Krajnik, J. P. Fentanes, O. M. Mozos, T. Duckett, J. Ekekrantz, and M. Hanheide, “Long-term topological localisation for service robots in dynamic environments using spectral maps,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2014.
    [BibTeX] [Abstract] [EPrints]

    This paper presents a new approach for topological localisation of service robots in dynamic indoor environments. In contrast to typical localisation approaches that rely mainly on static parts of the environment, our approach makes explicit use of information about changes by learning and modelling the spatio-temporal dynamics of the environment where the robot is acting. The proposed spatio-temporal world model is able to predict environmental changes in time, allowing the robot to improve its localisation capabilities during long-term operations in populated environments. To investigate the proposed approach, we have enabled a mobile robot to autonomously patrol a populated environment over a period of one week while building the proposed model representation. We demonstrate that the experience learned during one week is applicable for topological localization even after a hiatus of three months by showing that the localization error rate is significantly lower compared to static environment representations.

    @inproceedings{lirolem14423,
           booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
               month = {September},
               title = {Long-term topological localisation for service robots in dynamic environments using spectral maps},
              author = {Tomas Krajnik and Jaime Pulido Fentanes and Oscar Martinez Mozos and Tom Duckett and Johan Ekekrantz and Marc Hanheide},
           publisher = {IEEE},
                year = {2014},
                 url = {http://eprints.lincoln.ac.uk/14423/},
            abstract = {This paper presents a new approach for topological localisation of service robots in dynamic indoor environments. In contrast to typical localisation approaches that rely mainly on static parts of the environment, our approach makes explicit use of information about changes by learning and modelling the spatio-temporal dynamics of the environment where the robot is acting.  The proposed spatio-temporal world model is able to predict environmental changes in time, allowing the robot to improve its localisation capabilities during long-term operations in populated environments. To investigate the proposed approach, we have enabled a mobile robot to autonomously patrol a populated environment over a period of one week while building the proposed model representation. We demonstrate that the experience learned during one week is applicable for topological localization even after a hiatus of three months by showing that the localization error rate is significantly lower compared to static environment representations.}
    }
  • T. Krajnik, J. P. Fentanes, G. Cielniak, C. Dondrup, and T. Duckett, “Spectral analysis for long-term robotic mapping,” in 2014 IEEE International Conference on Robotics and Automation (ICRA 2014), 2014.
    [BibTeX] [Abstract] [EPrints]

    This paper presents a new approach to mobile robot mapping in long-term scenarios. So far, the environment models used in mobile robotics have been tailored to capture static scenes and dealt with environment changes by means of 'memory decay'. While these models keep up with slowly changing environments, their utilization in dynamic, real-world environments is difficult. The representation proposed in this paper models the environment's spatio-temporal dynamics by its frequency spectrum. The spectral representation of the time domain allows regularly occurring environment processes to be identified, analysed and remembered in a computationally efficient way. Knowledge of the periodicity of the different environment processes constitutes the model's predictive capabilities, which are especially useful for long-term mobile robotics scenarios. In the experiments presented, the proposed approach is applied to data collected by a mobile robot patrolling an indoor environment over a period of one week. Three scenarios are investigated, including intruder detection and 4D mapping. The results indicate that the proposed method allows arbitrary timescales to be represented with constant (and low) memory requirements, achieving compression rates up to 10^6. Moreover, the representation allows prediction of the future environment state with ~90% precision.

    @inproceedings{lirolem13273,
           booktitle = {2014 IEEE International Conference on Robotics and Automation (ICRA 2014)},
               month = {May},
               title = {Spectral analysis for long-term robotic mapping},
              author = {Tomas Krajnik and Jaime Pulido Fentanes and Grzegorz Cielniak and Christian Dondrup and Tom Duckett},
           publisher = {IEEE},
                year = {2014},
                 url = {http://eprints.lincoln.ac.uk/13273/},
            abstract = {This paper presents a new approach to mobile robot mapping in long-term scenarios. So far, the environment models used in mobile robotics have been tailored to capture static scenes and dealt with the environment changes by means of `memory decay'. While these models keep up with slowly changing environments, their utilization in dynamic, real world environments is difficult.

    The representation proposed in this paper models the environment's spatio-temporal dynamics by its frequency spectrum. The spectral representation of the time domain allows to identify, analyse and remember regularly occurring environment processes in a computationally efficient way. Knowledge of the periodicity of the different environment processes constitutes the model predictive capabilities, which are especially useful for long-term mobile robotics scenarios.

    In the experiments presented, the proposed approach is applied to data collected by a mobile robot patrolling an indoor environment over a period of one week. Three scenarios are investigated, including intruder detection and 4D mapping. The results indicate that the proposed method allows to represent arbitrary timescales with constant (and low) memory requirements, achieving compression rates up to 10$^6$. Moreover, the representation allows for prediction of the future environment's state with {$\sim$}90\% precision.}
    }
  • T. Krajnik, M. Nitsche, J. Faigl, P. Vanek, M. Saska, L. Preucil, T. Duckett, and M. Mejail, “A practical multirobot localization system,” Journal of Intelligent and Robotic Systems, vol. 76, iss. 3-4, pp. 539-562, 2014.
    [BibTeX] [Abstract] [EPrints]

    We present fast and precise vision-based software intended for multiple robot localization. The core component of the software is a novel and efficient algorithm for black and white pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and low-cost cameras, the core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. In addition, we present the method's mathematical model, which allows the expected localization precision, area of coverage, and processing speed to be estimated from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also make the source code public at http://purl.org/robotics/whycon, so it can be used as an enabling technology for various mobile robotic problems.

    @article{lirolem13653,
              volume = {76},
              number = {3-4},
               month = {December},
              author = {Tomas Krajnik and Matias Nitsche and Jan Faigl and Petr Vanek and Martin Saska and Libor Preucil and Tom Duckett and Marta Mejail},
               title = {A practical multirobot localization system},
           publisher = {Springer Heidelberg},
                year = {2014},
             journal = {Journal of Intelligent and Robotic Systems},
               pages = {539--562},
                 url = {http://eprints.lincoln.ac.uk/13653/},
            abstract = {We present a fast and precise vision-based software intended for multiple robot localization. The core component of the software is a novel and efficient algorithm for black and white pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and low-cost cameras, the core algorithm is able to process hundreds of images per second while tracking hundreds of objects with a millimeter precision. In addition, we present the method's mathematical model, which allows to estimate the expected localization precision, area of coverage, and processing speed from the camera's intrinsic parameters and hardware's processing capacity. The correctness of the presented model and performance of the algorithm in real-world conditions is verified in several experiments. Apart from the method description, we also make its source code public at http://purl.org/robotics/whycon; so it can be used as an enabling technology for various mobile robotic problems.}
    }
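
    The paper's mathematical model relates the camera's intrinsic parameters to the achievable localization precision. A much-simplified pinhole-camera stand-in is sketched below: distance follows from the known pattern size and its apparent diameter in pixels, and the effect of a one-pixel measurement error gives a rough feel for the depth resolution. The numbers are illustrative and not taken from the paper.

        def pattern_distance(focal_px, pattern_diameter_m, apparent_diameter_px):
            """Pinhole-camera estimate of the distance to a circular pattern of known size."""
            return focal_px * pattern_diameter_m / apparent_diameter_px

        def depth_resolution(focal_px, pattern_diameter_m, apparent_diameter_px):
            """Change in the distance estimate caused by a one-pixel error in the
            measured diameter: a rough proxy for the expected localization precision."""
            d1 = pattern_distance(focal_px, pattern_diameter_m, apparent_diameter_px)
            d2 = pattern_distance(focal_px, pattern_diameter_m, apparent_diameter_px + 1)
            return abs(d1 - d2)

        # Example: a 12 cm pattern seen as 80 px wide by a camera with f = 700 px
        print(pattern_distance(700, 0.12, 80))    # ~1.05 m away
        print(depth_resolution(700, 0.12, 80))    # ~13 mm per pixel of diameter error
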
  • S. Lemaignan, M. Hanheide, M. Karg, H. Khambhaita, L. Kunze, F. Lier, I. Lütkebohle, and G. Milliez, “Simulation and HRI recent perspectives with the MORSE simulator,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 8810, pp. 13-24, 2014.
    [BibTeX] [Abstract] [EPrints]

    Simulation in robotics is often a love-hate relationship: while simulators do save us a lot of time and effort compared to regular deployment of complex software architectures on complex hardware, simulators are also known to evade many of the real issues that robots need to manage when they enter the real world. Because humans are the paragon of dynamic, unpredictable, complex, real world entities, simulation of human-robot interactions may look condemned to fail, or, in the best case, to be mostly useless. This collective article reports on five independent applications of the MORSE simulator in the field of human-robot interaction: it appears that simulation is already useful, if not essential, to successfully carry out research in the field of HRI, and sometimes in scenarios we do not anticipate. © 2014 Springer International Publishing Switzerland.

    @article{lirolem21430,
              volume = {8810},
               month = {October},
              author = {S. Lemaignan and M. Hanheide and M. Karg and H. Khambhaita and L. Kunze and F. Lier and I. L{\"u}tkebohle and G. Milliez},
                 note = {In: Simulation, Modeling, and Programming for Autonomous Robots: 4th International Conference, SIMPAR 2014, Bergamo, Italy, October 20-23, 2014, Proceedings. Lecture Notes in Computer Science, vol. 8810, pp. 13--24},
               title = {Simulation and HRI recent perspectives with the MORSE simulator},
           publisher = {Springer Verlag},
                year = {2014},
             journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
               pages = {13--24},
                 url = {http://eprints.lincoln.ac.uk/21430/},
            abstract = {Simulation in robotics is often a love-hate relationship: while simulators do save us a lot of time and effort compared to regular deployment of complex software architectures on complex hardware, simulators are also known to evade many of the real issues that robots need to manage when they enter the real world. Because humans are the paragon of dynamic, unpredictable, complex, real world entities, simulation of human-robot interactions may look condemned to fail, or, in the best case, to be mostly useless. This collective article reports on five independent applications of the MORSE simulator in the field of human-robot interaction: It appears that simulation is already useful, if not essential, to successfully carry out research in the field of HRI, and sometimes in scenarios we do not anticipate. {\copyright} 2014 Springer International Publishing Switzerland.}
    }
  • D. Liu and S. Yue, “Spiking neural network for visual pattern recognition,” in International Conference on Multisensor Fusion and Information Integration for Intelligent Systems, MFI 2014, 2014, pp. 1-5.
    [BibTeX] [Abstract] [EPrints]

    Most visual pattern recognition algorithms try to emulate the mechanism of the visual pathway within the human brain. For the classic face recognition task, batch and on-line learning rules that use the spatiotemporal information extracted by a spiking neural network (SNN) stand out from their competitors. However, the former simply considers the average pattern within a class, and the latter relies only on the nearest relevant single pattern. In this paper, a novel learning rule and its SNN framework are proposed. They consider all relevant patterns in the local domain around the undetermined sample, rather than just the nearest relevant single pattern. Experimental results show that the proposed learning rule and its SNN framework obtain satisfactory testing results on the ORL face database.

    @inproceedings{lirolem16638,
           booktitle = {International Conference on Multisensor Fusion and Information Integration for Intelligent Systems, MFI 2014},
               title = {Spiking neural network for visual pattern recognition},
              author = {Daqi Liu and Shigang Yue},
           publisher = {Institute of Electrical and Electronics Engineers Inc.},
                year = {2014},
               pages = {1--5},
             journal = {Proceedings of 2014 International Conference on Multisensor Fusion and Information Integration for Intelligent Systems, MFI 2014},
                 url = {http://eprints.lincoln.ac.uk/16638/},
            abstract = {Most of visual pattern recognition algorithms try to emulate the mechanism of visual pathway within the human brain. Regarding of classic face recognition task, by using the spatiotemporal information extracted from Spiking neural network (SNN), batch learning rule and on-line learning rule stand out from their competitors. However, the former one simply considers the average pattern within the class, and the latter one just relies on the nearest relevant single pattern. In this paper, a novel learning rule and its SNN framework has been proposed. It considers all relevant patterns in the local domain around the undetermined sample rather than just nearest relevant single pattern. Experimental results show the proposed learning rule and its SNN framework obtains satisfactory testing results under the ORL face database.}
    }
  • H. Liu and S. Yue, “An efficient method to structural static reanalysis with deleting support constraints,” Structural Engineering and Mechanics, vol. 52, iss. 6, pp. 1121-1134, 2014.
    [BibTeX] [Abstract] [EPrints]

    Structural design is usually an optimization process. Numerous parameters, such as the member shapes and sizes, the elasticity modulus of the material, the locations of nodes and the support constraints, can be selected as design variables. These variables are progressively revised in order to obtain a satisfactory structure. Each modification requires a fresh analysis for the displacements and stresses, and reanalysis can be employed to reduce the computational cost. This paper focuses on the static reanalysis problem with the modification of deleting some supports. An efficient reanalysis method is proposed. The method makes full use of the initial information and preserves ease of implementation. Numerical examples show that the calculated results of the proposed method are identical to those of the direct analysis, while the computational time is remarkably reduced.

    @article{lirolem16505,
              volume = {52},
              number = {6},
               month = {December},
              author = {H. Liu and Shigang Yue},
               title = {An efficient method to structural static reanalysis with deleting support constraints},
           publisher = {Techno Press},
                year = {2014},
             journal = {Structural Engineering and Mechanics},
               pages = {1121--1134},
                 url = {http://eprints.lincoln.ac.uk/16505/},
            abstract = {Structural design is usually an optimization process. Numerous parameters such as the member shapes and sizes, the elasticity modulus of material, the locations of nodes and the support constraints can be selected as design variables. These variables are progressively revised in order to obtain a satisfactory structure. Each modification requires a fresh analysis for the displacements and stresses, and reanalysis can be employed to reduce the computational cost. This paper is focused on static reanalysis problem with modification of deleting some supports. An efficient reanalysis method is proposed. The method makes full use of the initial information and preserves the ease of implementation. Numerical examples show that the calculated results of the proposed method are the identical as those of the direct analysis, while the computational time is remarkably reduced.}
    }
  • F. Moreno, G. Cielniak, and T. Duckett, “Evaluation of laser range-finder mapping for agricultural spraying vehicles,” in Towards autonomous robotic systems, A. Natraj, S. Cameron, C. Melhuish, and M. Witkowski, Eds., Springer Berlin Heidelberg, 2014, vol. 8069, pp. 210-221.
    [BibTeX] [Abstract] [EPrints]

    In this paper, we present a new application of laser range-finder sensing to agricultural spraying vehicles. The current generation of spraying vehicles uses automatic controllers to maintain the height of the sprayer booms above the crop. However, these control systems are typically based on ultrasonic sensors mounted on the booms, which limits the accuracy of the measurements and the response of the controller to changes in the terrain, resulting in a sub-optimal spraying process. To overcome these limitations, we propose to use a laser scanner, attached to the front of the sprayer's cabin, to scan the ground surface in front of the vehicle and to build a scrolling 3D map of the terrain. We evaluate the proposed solution in a series of field tests, demonstrating that the approach provides a more detailed and accurate representation of the environment than the current sonar-based solution, which can lead to the development of more efficient boom control systems.

    @incollection{lirolem19647,
              volume = {8069},
               month = {June},
              author = {Francisco-Angel Moreno and Grzegorz Cielniak and Tom Duckett},
              series = {Lecture Notes in Computer Science},
                note = {14th Annual Conference, TAROS 2013, Oxford, UK, August 28--30, 2013, Revised Selected Papers},
           booktitle = {Towards autonomous robotic systems},
              editor = {Ashutosh Natraj and Stephen Cameron and Chris Melhuish and Mark Witkowski},
               title = {Evaluation of laser range-finder mapping for agricultural spraying vehicles},
           publisher = {Springer Berlin Heidelberg},
                year = {2014},
               pages = {210--221},
                 url = {http://eprints.lincoln.ac.uk/19647/},
            abstract = {In this paper, we present a new application of laser range-finder sensing to agricultural spraying vehicles. The current generation of spraying vehicles use automatic controllers to maintain the height of the sprayer booms above the crop. However, these control systems are typically based on ultrasonic sensors mounted on the booms, which limits the accuracy of the measurements and the response of the controller to changes in the terrain, resulting in a sub-optimal spraying process. To overcome these limitations, we propose to use a laser scanner, attached to the front of the sprayer's cabin, to scan the ground surface in front of the vehicle and to build a scrolling 3d map of the terrain. We evaluate the proposed solution in a series of field tests, demonstrating that the approach provides a more detailed and accurate representation of the environment than the current sonar-based solution, and which can lead to the development of more efficient boom control systems.}
    }
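
    The following Python sketch illustrates, in very simplified form, the idea of turning planar laser scans from a forward-mounted, downward-pitched scanner into a scrolling height map of the terrain ahead of the vehicle. It is not the system described in the entry above: the mounting geometry, grid size and wrap-around indexing are illustrative assumptions only.

        import math
        import numpy as np

        CELL = 0.1                # grid resolution [m]
        WIDTH, LENGTH = 40, 60    # cells across the boom / ahead of the vehicle

        def scan_to_points(ranges, angle_min, angle_inc, sensor_height, sensor_pitch):
            """One planar scan -> (x forward, y left, z up) points in the vehicle frame."""
            pts = []
            for i, r in enumerate(ranges):
                a = angle_min + i * angle_inc
                x_s, y_s = r * math.cos(a), r * math.sin(a)       # point in the scan plane
                x = x_s * math.cos(sensor_pitch)                  # rotate plane by the downward pitch
                z = sensor_height - x_s * math.sin(sensor_pitch)
                pts.append((x, y_s, z))
            return pts

        def update_height_map(height_map, pts, vehicle_x):
            """Insert points into a grid that scrolls with the vehicle's forward position."""
            for x, y, z in pts:
                col = int((x + vehicle_x) / CELL) % LENGTH        # wrap-around 'scrolling' index
                row = int(y / CELL) + WIDTH // 2
                if 0 <= row < WIDTH:
                    height_map[row, col] = max(height_map[row, col], z)
            return height_map

        height_map = np.zeros((WIDTH, LENGTH))
        scan = scan_to_points([3.0] * 180, -math.pi / 2, math.pi / 180, 1.5, math.radians(15))
        height_map = update_height_map(height_map, scan, vehicle_x=0.0)
        print(height_map.max())   # highest terrain point currently in the map
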
  • L. Shi, C. Zhang, and S. Yue, “Vector control IC for permanent magnet synchronous motor,” in 2014 IEEE International Conference on Electron Devices and Solid-State Circuits (EDSSC), 2014.
    [BibTeX] [Abstract] [EPrints]

    This paper presents a full-digital vector control integrated circuit (IC) for permanent magnet synchronous motors (PMSM), taking the hardware structure into account. We adopted a top-down, modular partitioning approach to the logic optimization design. The design specifications of the space vector pulse width modulation (SVPWM) unit and the vector coordinate transformation are illustrated. All of the modules were implemented in pure hardware and designed in the Verilog hardware description language (HDL). Moreover, the proposed design was verified with Simulink-Matlab and on a field programmable gate array (FPGA). © 2014 IEEE.

    @inproceedings{lirolem17531,
               month = {June},
              author = {Licheng Shi and Chun Zhang and Shigang Yue},
                note = {Conference Code:111593},
           booktitle = {2014 IEEE International Conference on Electron Devices and Solid-State Circuits (EDSSC)},
               title = {Vector control IC for permanent magnet synchronous motor},
           publisher = {Institute of Electrical and Electronics Engineers Inc.},
             journal = {2014 IEEE International Conference on Electron Devices and Solid-State Circuits, EDSSC 2014},
                year = {2014},
            keywords = {ARRAY(0x7fdc781a7ae8)},
                 url = {http://eprints.lincoln.ac.uk/17531/},
            abstract = {This paper presents a full-digital vector control integrated circuit(IC) for permanent magnet synchronous motor (PMSM) with considering hardware structure. We adopted top-down and modular partitioning logic optimization design. Design specification of space vector pulse width modulation (SVPWM) unit, vector coordinate transformation are illustrated. All of the modules were implemented with pure hardware and designed with Verilog hardware description language (HDL). Moreover, the proposed design was verified by Simulink-Matlab and field programmable gate array (FPGA). {\copyright} 2014 IEEE.}
    }
  • T. Stone, M. Mangan, P. Ardin, and B. Webb, “Sky segmentation with ultraviolet images can be used for navigation,” in 2014 Robotics: Science and Systems Conference, 2014.
    [BibTeX] [Abstract] [EPrints]

    Inspired by ant navigation, we explore a method for sky segmentation using ultraviolet (UV) light. A standard camera is adapted to allow collection of outdoor images containing light in the visible range, in UV only and in green only. Automatic segmentation of the sky region using UV only is significantly more accurate and far more consistent than visible wavelengths over a wide range of locations, times and weather conditions, and can be accomplished with a very low complexity algorithm. We apply this method to obtain compact binary (sky vs non-sky) images from panoramic UV images taken along a 2km route in an urban environment. Using either sequence SLAM or a visual compass on these images produces reliable localisation and orientation on a subsequent traversal of the route under different weather conditions.

    @inproceedings{lirolem24748,
           booktitle = {2014 Robotics: Science and Systems Conference},
               month = {July},
               title = {Sky segmentation with ultraviolet images can be used for navigation},
              author = {Thomas Stone and Michael Mangan and Paul Ardin and Barbara Webb},
           publisher = {Robotics: Science and Systems},
                year = {2014},
             journal = {Robotics: Science and Systems},
            keywords = {ARRAY(0x7fdc780ad530)},
                 url = {http://eprints.lincoln.ac.uk/24748/},
            abstract = {Inspired by ant navigation, we explore a method for sky segmentation using ultraviolet (UV) light. A standard camera is adapted to allow collection of outdoor images containing light in the visible range, in UV only and in green only. Automatic segmentation of the sky region using UV only is significantly more accurate and far more consistent than visible wavelengths over a wide range of locations, times and weather conditions, and can be accomplished with a very low complexity algorithm. We apply this method to obtain compact binary (sky vs non-sky) images from panoramic UV images taken along a 2km route in an urban environment. Using either sequence SLAM or a visual compass on these images produces reliable localisation and orientation on a subsequent traversal of the route under different weather conditions.}
    }
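
    The abstract above notes that sky/non-sky segmentation of UV-only images can be done with a very low-complexity algorithm, but does not state which one. The sketch below uses plain Otsu thresholding on an 8-bit UV image as one such low-complexity stand-in; it is an illustration, not the authors' algorithm, and the synthetic test image is an assumption.

        import numpy as np

        def otsu_threshold(img_u8):
            """Return the Otsu threshold for an 8-bit single-channel image."""
            hist = np.bincount(img_u8.ravel(), minlength=256).astype(float)
            p = hist / hist.sum()
            omega = np.cumsum(p)                      # cumulative class probability
            mu = np.cumsum(p * np.arange(256))        # cumulative mean
            mu_t = mu[-1]
            with np.errstate(divide='ignore', invalid='ignore'):
                sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
            return int(np.argmax(np.nan_to_num(sigma_b)))

        def segment_sky(uv_image_u8):
            """Label the brighter (sky) pixels of a UV-only image as True."""
            return uv_image_u8 > otsu_threshold(uv_image_u8)

        # Synthetic example: a bright 'sky' upper half over a dark lower half.
        img = np.vstack([np.full((100, 200), 200, np.uint8),
                         np.full((100, 200), 40, np.uint8)])
        mask = segment_sky(img)
        print(mask[:100].mean(), mask[100:].mean())   # ~1.0 for the sky half, ~0.0 below
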
  • Y. Tang, J. Peng, and S. Yue, “Cyclic and simultaneous iterative methods to matrix equations of the form A_iXB_i = F_i,” Numerical Algorithms, vol. 66, iss. 2, pp. 379-397, 2014.
    [BibTeX] [Abstract] [EPrints]

    This paper deals with a general type of linear matrix equation problem. It presents new iterative algorithms to solve matrix equations of the form A_iXB_i = F_i. These algorithms are based on the incremental subgradient and the parallel subgradient methods. The convergence regions of these algorithms are larger than those of other existing iterative algorithms. Finally, some experimental results are presented to show the efficiency of the proposed algorithms. © 2013 Springer Science+Business Media New York.

    @article{lirolem11574,
              volume = {66},
              number = {2},
               month = {June},
              author = {Yuchao Tang and Jigen Peng and Shigang Yue},
               title = {Cyclic and simultaneous iterative methods to matrix equations of the form AiX Bi = Fi},
           publisher = {Springer},
                year = {2014},
             journal = {Numerical Algorithms},
               pages = {379--397},
            keywords = {ARRAY(0x7fdc780ad8f0)},
                 url = {http://eprints.lincoln.ac.uk/11574/},
            abstract = {This paper deals with a general type of linear matrix equation problem. It presents new iterative algorithms to solve the matrix equations of the form AiX Bi = Fi. These algorithms are based on the incremental subgradient and the parallel subgradient methods. The convergence region of these algorithms are larger than other existing iterative algorithms. Finally, some experimental results are presented to show the efficiency of the proposed algorithms. {\copyright} 2013 Springer Science+Business Media New York.}
    }
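
    For readers unfamiliar with the problem form in the entry above, the sketch below writes the coupled equations A_i X B_i = F_i as a single linear system via the identity vec(A X B) = (B^T kron A) vec(X) and solves it by ordinary least squares. This is only a small reference baseline showing the problem structure; it is not the incremental- or parallel-subgradient algorithms proposed in the paper, and all names are illustrative.

        import numpy as np

        def solve_common_X(As, Bs, Fs):
            """Least-squares X with A_i X B_i ~= F_i for all i, via vec(A X B) = (B^T kron A) vec(X)."""
            M = np.vstack([np.kron(B.T, A) for A, B in zip(As, Bs)])
            f = np.concatenate([F.flatten(order='F') for F in Fs])   # column-stacking vec()
            x, *_ = np.linalg.lstsq(M, f, rcond=None)
            n, m = As[0].shape[1], Bs[0].shape[0]
            return x.reshape((n, m), order='F')

        rng = np.random.default_rng(0)
        X_true = rng.standard_normal((3, 3))
        As = [rng.standard_normal((4, 3)) for _ in range(2)]
        Bs = [rng.standard_normal((3, 4)) for _ in range(2)]
        Fs = [A @ X_true @ B for A, B in zip(As, Bs)]
        print(np.allclose(solve_common_X(As, Bs, Fs), X_true))   # True for this consistent system
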
  • P. Urcola, T. Duckett, and G. Cielniak, “On-line trajectory planning for autonomous spraying vehicles,” in International Workshop on Recent Advances in Agricultural Robotics, 2014.
    [BibTeX] [Abstract] [EPrints]

    In this paper, we present a new application of on-line trajectory planning for autonomous sprayers. The current generation of these vehicles uses automatic controllers to maintain the height of the spraying booms above the crop. However, such systems are typically based on ultrasonic sensors mounted directly on the booms, which limits the response of the controller to changes in the terrain, resulting in a suboptimal spraying process. To overcome these limitations, we propose to use 3D maps of the terrain ahead of the spraying booms based on laser range-finder measurements combined with GPS-based localisation. Four different boom trajectory planning solutions which utilise the 3D maps are considered and their accuracy and real-time suitability are evaluated based on data collected from field tests. The point optimisation and interpolation technique presents a practical solution demonstrating satisfactory performance under real-time constraints.

    @inproceedings{lirolem14603,
           booktitle = {International Workshop on Recent Advances in Agricultural Robotics},
               month = {July},
               title = {On-line trajectory planning for autonomous spraying vehicles},
              author = {Pablo Urcola and Tom Duckett and Grzegorz Cielniak},
                year = {2014},
            keywords = {ARRAY(0x7fdc78152f98)},
                 url = {http://eprints.lincoln.ac.uk/14603/},
            abstract = {In this paper, we present a new application of on-line trajectory planning for autonomous sprayers. The current generation of these vehicles use automatic controllers to maintain the height of the spraying booms above the crop. However, such systems are typically based on ultrasonic sensors mounted directly on the booms, which limits the
    response of the controller to changes in the terrain, resulting in a suboptimal spraying process. To overcome these limitations, we propose to use 3D maps of the terrain ahead of the spraying booms based on laser range-finder measurements combined with GPS-based localisation. Four different boom trajectory planning solutions which utilise the 3D maps are considered and their accuracy and real-time suitability is evaluated based on data collected from field tests. The point optimisation and interpolation technique presents a practical solution demonstrating satisfactory performance under real-time constraints.}
    }
  • J. Xu, R. Wang, and S. Yue, “Bio-inspired classifier for road extraction from remote sensing imagery,” Journal of Applied Remote Sensing, vol. 8, iss. 1, p. 83577, 2014.
    [BibTeX] [Abstract] [EPrints]

    An adaptive approach for road extraction inspired by the mechanism of primary visual cortex (V1) is proposed. The motivation originates from the characteristics of the receptive fields in V1. It has been proved that human or primate visual systems can distinguish useful cues from real scenes effortlessly while traditional computer vision techniques cannot accomplish this task easily. This idea motivates us to design a bio-inspired model for road extraction from remote sensing imagery. The proposed approach is an improved support vector machine (SVM) based on the pooling of feature vectors, using an improved Gaussian radial basis function (RBF) kernel with tuning on synaptic gains. The synaptic gains comprise the feature vectors through an iterative optimization process representing the strength and width of the Gaussian RBF kernel. The synaptic gains integrate the excitation and inhibition stimuli based on internal connections from V1. The summation of synaptic gains contributes to the pooling of feature vectors. The experimental results verify the correlation between the synaptic gain and classification rules, and then show better performance in comparison with hidden Markov model, SVM, and fuzzy classification approaches. Our contribution is an automatic approach to road extraction without prelabeling and postprocessing work. Another apparent advantage is that our method is robust for images taken even under complex weather conditions such as snowy and foggy weather. © 2014 SPIE.

    @article{lirolem14764,
              volume = {8},
              number = {1},
               month = {August},
              author = {Jiawei Xu and Ruisheng Wang and Shigang Yue},
               title = {Bio-inspired classifier for road extraction from remote sensing imagery},
           publisher = {Society of Photo-optical Instrumentation Engineers (SPIE)},
                year = {2014},
             journal = {Journal of Applied Remote Sensing},
               pages = {083577},
            keywords = {ARRAY(0x7fdc780a9ec0)},
                 url = {http://eprints.lincoln.ac.uk/14764/},
            abstract = {An adaptive approach for road extraction inspired by the mechanism of primary visual cortex (V1) is proposed. The motivation is originated by the characteristics in the receptive field from V1. It has been proved that human or primate visual systems can distinguish useful cues from real scenes effortlessly while traditional computer vision techniques cannot accomplish this task easily. This idea motivates us to design a bio-inspired model for road extraction from remote sensing imagery. The proposed approach is an improved support vector machine (SVM) based on the pooling of feature vectors, using an improved Gaussian radial basis function (RBF) kernel with tuning on synaptic gains. The synaptic gains comprise the feature vectors through an iterative optimization process representing the strength and width of Gaussian RBF kernel. The synaptic gains integrate the excitation and inhibition stimuli based on internal connections from V1. The summation of synaptic gains contributes to pooling of feature vectors. The experimental results verify the correlation between the synaptic gain and classification rules, and then show better performance in comparison with hidden Markov model, SVM, and fuzzy classification approaches. Our contribution is an automatic approach to road extraction without prelabeling and postprocessing work. Another apparent advantage is that our method is robust for images taken even under complex weather conditions such as snowy and foggy weather. {\copyright} 2014 SPIE.}
    }
  • J. Xu and S. Yue, “Mimicking visual searching with integrated top down cues and low-level features,” Neurocomputing, vol. 133, pp. 1-17, 2014.
    [BibTeX] [Abstract] [EPrints]

    Visual searching is a perception task involving visual attention, attention shifts and an active scan of the visual environment for a particular object or feature. The key idea of our paper is to mimic human visual searching in static and dynamic scenes. Building an artificial vision system that performs visual searching could be helpful for medical and psychological applications and for human-machine interaction. Recent state-of-the-art research focuses on bottom-up and top-down saliency maps. Saliency maps indicate the saliency likelihood of each pixel; however, understanding the visual searching process can help an artificial vision system examine details in a way similar to humans, which goes deeper than a saliency map and will be useful for future robots and machine vision systems. This paper proposes a computational model that tries to mimic the human visual searching process, with an emphasis on motion cues in visual processing and searching. Our model analyses attention shifts by fusing top-down bias and bottom-up cues, and also takes the motion factor into account in the visual searching process. The proposed model involves five modules: the pre-learning process; top-down biasing; the bottom-up mechanism; a multi-layer neural network; and attention shifts. Evaluation on benchmark databases and real-time video showed that the model demonstrates high robustness and real-time capability in complex dynamic scenes.

    @article{lirolem13453,
              volume = {133},
               month = {June},
              author = {Jiawei Xu and Shigang Yue},
               title = {Mimicking visual searching with integrated top down cues and low-level features},
           publisher = {Elsevier},
             journal = {Neurocomputing},
               pages = {1--17},
                year = {2014},
            keywords = {ARRAY(0x7fdc781a7aa0)},
                 url = {http://eprints.lincoln.ac.uk/13453/},
            abstract = {Visual searching is a perception task involved with visual attention, attention shift and active scan of the visual environment for a particular object or feature. The key idea of our paper is to mimic the human visual searching under the static and dynamic scenes. To build up an artificial vision system that performs the visual searching could be helpful to medical and psychological application development to human machine interaction. Recent state-of-the-art researches focus on the bottom-up and top-down saliency maps. Saliency maps indicate that the saliency likelihood of each pixel, however, understanding the visual searching process can help an artificial vision system exam details in a way similar to human and they will be good for future robots or machine vision systems which is a deeper digest than the saliency map. This paper proposed a computational model trying to mimic human visual searching process and we emphasis the motion cues on the visual processing and searching. Our model analysis the attention shifts by fusing the top-down bias and bottom-up cues. This model also takes account the motion factor into the visual searching processing. The proposed model involves five modules: the pre-learning process; top-down biasing; bottom-up mechanism; multi-layer neural network and attention shifts. Experiment evaluation results via benchmark databases and real-time video showed the model demonstrated high robustness and real-time ability under complex dynamic scenes.}
    }
  • S. Yue, K. Harmer, K. Guo, K. Adams, and A. Hunter, “Automatic blush detection in 'concealed information' test using visual stimuli,” International Journal of Data Mining, Modelling and Management, vol. 6, iss. 2, pp. 187-201, 2014.
    [BibTeX] [Abstract] [EPrints]

    Blushing has been identified as an indicator of deception, shame, anxiety and embarrassment. Although normally associated with the skin coloration of the face, a blush response also affects skin surface temperature. In this paper, an approach to detect a blush response automatically is presented using the Argus P7225 thermal camera from e2v. The algorithm was tested on a sample population of 51 subjects, while using visual stimuli to elicit a response, and achieved recognition rates of ~77% TPR and ~60% TNR, indicating that a thermal image sensor is a promising device for picking up the subtle temperature changes synchronised with stimuli.

    @article{lirolem14660,
              volume = {6},
              number = {2},
               month = {June},
              author = {Shigang Yue and Karl Harmer and Kun Guo and Karen Adams and Andrew Hunter},
                title = {Automatic blush detection in 'concealed information' test using visual stimuli},
           publisher = {Inderscience},
                year = {2014},
             journal = {International Journal of Data Mining, Modelling and Management},
               pages = {187--201},
            keywords = {ARRAY(0x7fdc780aa1d8)},
                 url = {http://eprints.lincoln.ac.uk/14660/},
            abstract = {Blushing has been identified as an indicator of deception, shame, anxiety and embarrassment. Although normally associated with the skin coloration of the face, a blush response also affects skin surface temperature. In this paper, an approach to detect a blush response automatically is presented using the Argus P7225 thermal camera from e2v. The algorithm was tested on a sample population of 51 subjects, while using visual stimuli to elicit a response,  and achieved recognition rates of {\texttt{\char126}}77\% TPR and {\texttt{\char126}}60\% TNR, indicating a thermal image sensor is the prospective device to pick up subtle temperature change synchronised with stimuli.}
    }
  • G. Zahi and S. Yue, “Reducing motion blurring associated with temporal summation in low light scenes for image quality enhancement,” in International Conference on Multisensor Fusion and Information Integration for Intelligent Systems, MFI 2014, 2014, pp. 1-5.
    [BibTeX] [Abstract] [EPrints]

    In order to see under low light conditions, nocturnal insects rely on neural strategies based on combinations of spatial and temporal summation. Though these summation techniques, when modelled, are effective in improving the quality of low light images, using temporal summation in scenes where the image velocity is high comes at the cost of motion blurring in the output scenes. Most recent research has been directed towards reducing motion blurring in scenes where motion is caused by moving objects, rather than effectively reducing motion blurring in scenes where motion is caused by moving cameras. This makes it impossible to implement the night vision algorithm on moving robots or cars that operate under low light conditions. In this paper we present a generic new method that can replace the normal temporal summation in scenes where motion is detected. The proposed method is suitable for motion caused by moving objects as well as by moving cameras. The effectiveness of this new generic method is shown with relevant supporting experiments.

    @inproceedings{lirolem16637,
           booktitle = {International Conference on Multisensor Fusion and Information Integration for Intelligent Systems, MFI 2014},
               title = {Reducing motion blurring associated with temporal summation in low light scenes for image quality enhancement},
              author = {Gabriel Zahi and Shigang Yue},
           publisher = {Institute of Electrical and Electronics Engineers Inc.},
                year = {2014},
               pages = {1--5},
             journal = {Processing of 2014 International Conference on Multisensor Fusion and Information Integration for Intelligent Systems, MFI 2014},
            keywords = {ARRAY(0x7fdc78190a18)},
                 url = {http://eprints.lincoln.ac.uk/16637/},
            abstract = {In order to see under low light conditions nocturnal insects rely on neural strategies based on combinations of spatial and temporal summations. Though these summation techniques when modelled are effective in improving the quality of low light images, using the temporal summation in scenes where image velocity is high only come at a cost of motion blurring in the output scenes. Most recent research has been towards reducing motion blurring in scenes where motion is caused by moving objects rather than effectively reducing motion blurring in scenes where motion is caused by moving cameras. This makes it impossible to implement the night vision algorithm in moving robots or cars that operate under low light conditions. In this paper we present a generic new method that can replace the normal temporal summation in scenes where motion is detected. The proposed method is both suitable for motion caused by moving objects as well as moving cameras. The effectiveness of this new generic method is shown with relevant supporting experiments.}
    }
  • Z. Zhang, S. Yue, M. Liao, and F. Long, “Danger theory based artificial immune system solving dynamic constrained single-objective optimization,” Soft Computing, vol. 18, iss. 1, pp. 185-206, 2014.
    [BibTeX] [Abstract] [EPrints]

    In this paper, we propose an artificial immune system (AIS) based on the danger theory in immunology for solving dynamic nonlinear constrained single-objective optimization problems with time-dependent design spaces. The proposed AIS executes three modules in order: danger detection, immune evolution and memory update. The first module identifies whether there are changes in the optimization environment and decides the environmental level, which helps to create the initial population in the environment and promotes the process of solution search. The second module runs a loop of optimization, in which three sub-populations, each with a dynamic size, simultaneously seek the location of the optimal solution along different directions through co-evolution. The last module stores and updates the memory cells which help the first module decide the environmental level. This optimization system is an on-line and adaptive one with the characteristics of simplicity, modularization and co-evolution. The numerical experiments and the results acquired by nonparametric statistical procedures, based on 22 benchmark problems and an engineering problem, show that the proposed approach performs globally well compared with the other algorithms and is of potential use for many kinds of dynamic optimization problems. © 2013 Springer-Verlag Berlin Heidelberg.

    @article{lirolem11410,
              volume = {18},
              number = {1},
               month = {January},
              author = {Zhuhong Zhang and Shigang Yue and Min Liao and Fei Long},
               title = {Danger theory based artificial immune system solving dynamic constrained single-objective optimization},
           publisher = {Springer Verlag (Germany)},
                year = {2014},
             journal = {Soft Computing},
               pages = {185--206},
            keywords = {ARRAY(0x7fdc78086988)},
                 url = {http://eprints.lincoln.ac.uk/11410/},
            abstract = {In this paper, we propose an artificial immune system (AIS) based on the danger theory in immunology for solving dynamic nonlinear constrained single-objective optimization problems with time-dependent design spaces. Such proposed AIS executes orderly three modules-danger detection, immune evolution and memory update. The first module identifies whether there are changes in the optimization environment and decides the environmental level, which helps for creating the initial population in the environment and promoting the process of solution search. The second module runs a loop of optimization, in which three sub-populations each with a dynamic size seek simultaneously the location of the optimal solution along different directions through co-evolution. The last module stores and updates the memory cells which help the first module decide the environmental level. This optimization system is an on-line and adaptive one with the characteristics of simplicity, modularization and co-evolution. The numerical experiments and the results acquired by the nonparametric statistic procedures, based on 22 benchmark problems and an engineering problem, show that the proposed approach performs globally well over the compared algorithms and is of potential use for many kinds of dynamic optimization problems. {\copyright} 2013 Springer-Verlag Berlin Heidelberg.}
    }

2013

  • F. Arvin and M. Bekravi, “Encoderless position estimation and error correction techniques for miniature mobile robots,” Turkish Journal of Electrical Engineering & Computer Sciences, vol. 21, iss. 6, pp. 1631-1645, 2013.
    [BibTeX] [Abstract] [EPrints]

    This paper presents an encoderless position estimation technique for miniature-sized mobile robots. Odometry techniques, which are based on the hardware components, are commonly used for calculating the geometric location of mobile robots. Therefore, the robot must be equipped with an appropriate sensor to measure the motion. However, due to the hardware limitations of some robots, employing extra hardware is impossible. On the other hand, in swarm robotic research, which uses a large number of mobile robots, equipping the robots with motion sensors might be costly. In this study, the trajectory of the robot is divided into several small displacements over short spans of time. Therefore, the position of the robot is calculated within a short period, using the speed equations of the robot’s wheel. In addition, an error correction function is proposed that estimates the errors of the motion using a current monitoring technique. The experiments illustrate the feasibility of the proposed position estimation and error correction techniques to be used in miniature-sized mobile robots without requiring an additional sensor.

    @article{lirolem12078,
              volume = {21},
              number = {6},
               month = {October},
              author = {Farshad Arvin and Masoud Bekravi},
               title = {Encoderless position estimation and error correction techniques for miniature mobile robots},
           publisher = {Scientific and Technical Research Council of Turkey},
                year = {2013},
             journal = {Turkish Journal of Electrical Engineering \& Computer Sciences},
               pages = {1631--1645},
            keywords = {ARRAY(0x7fdc78086640)},
                 url = {http://eprints.lincoln.ac.uk/12078/},
            abstract = { This paper presents an encoderless position estimation technique for miniature-sized mobile robots. Odometry techniques, which are based on the hardware components, are commonly used for calculating the geometric location of mobile robots. Therefore, the robot must be equipped with an appropriate sensor to measure the motion. However, due to the hardware limitations of some robots, employing extra hardware is impossible. On the other hand, in swarm robotic research, which uses a large number of mobile robots, equipping the robots with motion sensors might be costly. In this study, the trajectory of the robot is divided into several small displacements over short spans of time. Therefore, the position of the robot is calculated within a short period, using the speed equations of the robot's wheel. In addition, an error correction function is proposed that estimates the errors of the motion using a current monitoring technique. The experiments illustrate the feasibility of the proposed position estimation and error correction techniques to be used in miniature-sized mobile robots without requiring an additional sensor. }
    }
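
    As a simple illustration of the idea in the entry above of integrating a pose from the wheel speed equations over short time spans (rather than from encoder ticks), the Python sketch below performs dead reckoning for a differential-drive robot. The wheel speeds, wheel base and time step are illustrative values, and the paper's current-monitoring error correction is not reproduced.

        import math

        def integrate_pose(pose, v_left, v_right, wheel_base, dt):
            """Advance (x, y, theta) using wheel linear speeds [m/s] over a short interval dt."""
            x, y, theta = pose
            v = 0.5 * (v_left + v_right)               # forward speed of the robot centre
            omega = (v_right - v_left) / wheel_base    # yaw rate
            x += v * math.cos(theta) * dt
            y += v * math.sin(theta) * dt
            theta += omega * dt
            return (x, y, theta)

        # Example: drive straight for 1 s, then arc for 1 s, in 10 ms steps.
        pose = (0.0, 0.0, 0.0)
        for _ in range(100):
            pose = integrate_pose(pose, 0.10, 0.10, wheel_base=0.06, dt=0.01)
        for _ in range(100):
            pose = integrate_pose(pose, 0.08, 0.12, wheel_base=0.06, dt=0.01)
        print(pose)
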
  • P. E. Baxter, J. de Greeff, and T. Belpaeme, “Cognitive architecture for human-robot interaction: towards behavioural alignment,” Biologically Inspired Cognitive Architectures, vol. 6, pp. 30-39, 2013.
    [BibTeX] [Abstract] [EPrints]

    With increasingly competent robotic systems desired and required for social human-robot interaction comes the necessity for more complex means of control. Cognitive architectures (specifically the perspective where principles of structure and function are sought to account for multiple cognitive competencies) have only relatively recently been considered for application to this domain. In this paper, we describe one such set of architectural principles - activation dynamics over a developmental distributed associative substrate - and show how this enables an account of a fundamental competence for social cognition: multi-modal behavioural alignment. Data from real human-robot interactions is modelled using a computational system based on this set of principles to demonstrate how this competence can therefore be considered as embedded in wider cognitive processing. It is shown that the proposed system can model the behavioural characteristics of human subjects. While this study is a simulation using real interaction data, the results obtained validate the application of the proposed approach to this issue.

    @article{lirolem23076,
              volume = {6},
               month = {October},
              author = {Paul E. Baxter and Joachim de Greeff and Tony Belpaeme},
                title = {Cognitive architecture for human-robot interaction: towards behavioural alignment},
           publisher = {Elsevier B.V.},
             journal = {Biologically Inspired Cognitive Architectures},
               pages = {30--39},
                year = {2013},
            keywords = {ARRAY(0x7fdc7816b738)},
                 url = {http://eprints.lincoln.ac.uk/23076/},
            abstract = {With increasingly competent robotic systems desired and required for social human-robot interaction comes the necessity for more complex means of control. Cognitive architectures (specifically the perspective where principles of structure and function are sought to account for multiple cognitive competencies) have only relatively recently been considered for application to this domain. In this paper, we describe one such set of architectural principles - activation dynamics over a developmental distributed associative substrate - and show how this enables an account of a fundamental competence for social cognition: multi-modal behavioural alignment. Data from real human-robot interactions is modelled using a computational system based on this set of principles to demonstrate how this competence can therefore be considered as embedded in wider cognitive processing. It is shown that the proposed system can model the behavioural characteristics of human subjects. While this study is a simulation using real interaction data, the results obtained validate the application of the proposed approach to this issue.}
    }
  • P. Baxter, J. D. Greeff, R. Wood, and T. Belpaeme, “Modelling concept prototype competencies using a developmental memory model,” Paladyn, Journal of Behavioral Robotics, vol. 3, iss. 4, pp. 200-208, 2013.
    [BibTeX] [Abstract] [EPrints]

    The use of concepts is fundamental to human-level cognition, but there remain a number of open questions as to the structures supporting this competence. Specifically, it has been shown that humans use concept prototypes, a flexible means of representing concepts such that it can be used both for categorisation and for similarity judgements. In the context of autonomous robotic agents, the processes by which such concept functionality could be acquired would be particularly useful, enabling flexible knowledge representation and application. This paper seeks to explore this issue of autonomous concept acquisition. By applying a set of structural and operational principles that support a wide range of cognitive competencies within a developmental framework, the intention is to explicitly embed the development of concepts into a wider framework of cognitive processing. Comparison with a benchmark concept modelling system shows that the proposed approach can account for a number of features, namely concept-based classification, and its extension to prototype-like functionality.

    @article{lirolem23077,
              volume = {3},
              number = {4},
               month = {April},
              author = {Paul Baxter and Joachim De Greeff and Rachel Wood and Tony Belpaeme},
                note = {Issue cover date: December 2012},
               title = {Modelling concept prototype competencies using a developmental memory model},
           publisher = {De Gruyter/Springer},
                year = {2013},
             journal = {Paladyn, Journal of Behavioral Robotics},
               pages = {200--208},
            keywords = {ARRAY(0x7fdc7819ded0)},
                 url = {http://eprints.lincoln.ac.uk/23077/},
            abstract = {The use of concepts is fundamental to human-level cognition, but there remain a number of open questions as to the structures supporting this competence. Specifically, it has been shown that humans use concept prototypes, a flexible means of representing concepts such that it can be used both for categorisation and for similarity judgements. In the context of autonomous robotic agents, the processes by which such concept functionality could be acquired would be particularly useful, enabling flexible knowledge representation and application. This paper seeks to explore this issue of autonomous concept acquisition. By applying a set of structural and operational principles, that support a wide range of cognitive competencies, within a developmental framework, the intention is to explicitly embed the development of concepts into a wider framework of cognitive processing. Comparison with a benchmark concept modelling system shows that the proposed approach can account for a number of features, namely concept-based classification, and its extension to prototype-like functionality.}
    }
  • N. Bellotto, “A multimodal smartphone interface for active perception by visually impaired,” in IEEE SMC Int. Workshop on Human-Machine Systems, Cyborgs and Enhancing Devices (HUMASCEND), 2013.
    [BibTeX] [Abstract] [EPrints]

    The diffuse availability of mobile devices, such as smartphones and tablets, has the potential to bring substantial benefits to the people with sensory impairments. The solution proposed in this paper is part of an ongoing effort to create an accurate obstacle and hazard detector for the visually impaired, which is embedded in a hand-held device. In particular, it presents a proof of concept for a multimodal interface to control the orientation of a smartphone’s camera, while being held by a person, using a combination of vocal messages, 3D sounds and vibrations. The solution, which is to be evaluated experimentally by users, will enable further research in the area of active vision with human-in-the-loop, with potential application to mobile assistive devices for indoor navigation of visually impaired people.

    @inproceedings{lirolem11636,
           booktitle = {IEEE SMC Int. Workshop on Human-Machine Systems, Cyborgs and Enhancing Devices (HUMASCEND)},
               month = {October},
               title = {A multimodal smartphone interface for active perception by visually impaired},
              author = {Nicola Bellotto},
           publisher = {IEEE},
                year = {2013},
            keywords = {ARRAY(0x7fdc78196830)},
                 url = {http://eprints.lincoln.ac.uk/11636/},
            abstract = {The diffuse availability of mobile devices, such as smartphones and tablets, has the potential to bring substantial benefits to the people with sensory impairments. The solution proposed in this paper is part of an ongoing effort to create an accurate obstacle and hazard detector for the visually impaired, which is embedded in a hand-held device. In particular, it presents a proof of concept for a multimodal interface to control the orientation of a smartphone's camera, while being held by a person, using a combination of vocal messages, 3D sounds and vibrations. The solution, which is to be evaluated experimentally by users, will enable further research in the area of active vision with human-in-the-loop, with potential application to mobile assistive devices for indoor navigation of visually impaired people.}
    }
  • N. Bellotto, M. Hanheide, and N. V. de Weghe, “Qualitative design and implementation of human-robot spatial interactions,” in International Conference on Social Robotics (ICSR), 2013.
    [BibTeX] [Abstract] [EPrints]

    Despite the large number of navigation algorithms available for mobile robots, in many social contexts they often exhibit inopportune motion behaviours in proximity of people, often with very "unnatural" movements due to the execution of segmented trajectories or the sudden activation of safety mechanisms (e.g., for obstacle avoidance). We argue that the reason of the problem is not only the difficulty of modelling human behaviours and generating opportune robot control policies, but also the way human-robot spatial interactions are represented and implemented. In this paper we propose a new methodology based on a qualitative representation of spatial interactions, which is both flexible and compact, adopting the well-defined and coherent formalization of Qualitative Trajectory Calculus (QTC). We show the potential of a QTC-based approach to abstract and design complex robot behaviours, where the desired robot’s behaviour is represented together with its actual performance in one coherent approach, focusing on spatial interactions rather than pure navigation problems.

    @inproceedings{lirolem11637,
           booktitle = {International Conference on Social Robotics (ICSR)},
               month = {October},
               title = {Qualitative design and implementation of human-robot spatial interactions},
              author = {Nicola Bellotto and Marc Hanheide and Nico Van de Weghe},
           publisher = {Springer},
                year = {2013},
            keywords = {ARRAY(0x7fdc781555a0)},
                 url = {http://eprints.lincoln.ac.uk/11637/},
            abstract = {Despite the large number of navigation algorithms available for mobile robots, in many social contexts they often exhibit inopportune motion behaviours in proximity of people, often with very "unnatural" movements due to the execution of segmented trajectories or the sudden activation of safety mechanisms (e.g., for obstacle avoidance). We argue that the reason of the problem is not only the difficulty of modelling human behaviours and generating opportune robot control policies, but also the way human-robot spatial interactions are represented and implemented.
    In this paper we propose a new methodology based on a qualitative representation of spatial interactions, which is both flexible and compact, adopting the well-defined and coherent formalization of Qualitative Trajectory Calculus (QTC). We show the potential of a QTC-based approach to abstract and design complex robot behaviours, where the desired robot's behaviour is represented together with its actual performance in one coherent approach, focusing on spatial interactions rather than pure navigation problems.}
    }
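
    As a concrete flavour of the Qualitative Trajectory Calculus mentioned in the entry above, the sketch below computes the distance part of a basic QTC state for two agents from consecutive positions: '-' if an agent moves towards the other, '+' if away, '0' otherwise. It is a minimal illustration of the representation, not the design methodology of the paper; the threshold eps and all names are assumptions.

        import math

        def _symbol(p_prev, p_now, other, eps=1e-6):
            """'-' if the agent moved towards `other`, '+' if away, '0' otherwise."""
            d_prev = math.dist(p_prev, other)
            d_now = math.dist(p_now, other)
            if d_now < d_prev - eps:
                return '-'
            if d_now > d_prev + eps:
                return '+'
            return '0'

        def qtc_b(k_prev, k_now, l_prev, l_now):
            """Return the 2-symbol QTC_B state (k w.r.t. l, l w.r.t. k) for one time step."""
            return _symbol(k_prev, k_now, l_prev) + _symbol(l_prev, l_now, k_prev)

        # Human walks towards a stationary robot: expected state '-0'.
        print(qtc_b(k_prev=(0.0, 0.0), k_now=(0.5, 0.0),
                    l_prev=(3.0, 0.0), l_now=(3.0, 0.0)))
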
  • C. Cherino, G. Cielniak, P. Dickinson, and P. Geril, “FUBUTEC-ECEC'2013,” EUROSIS-ETI BVBA, 2013.
    [BibTeX] [Abstract] [EPrints]

    This edition covers Risk Management, Management Techniques, Production Design Optimization and Video Applications

    @manual{lirolem22903,
               month = {May},
                type = {Documentation},
               title = {FUBUTEC-ECEC'2013},
              author = {Cristina Cherino and Grzegorz Cielniak and Patrick Dickinson and Philippe Geril},
           publisher = {EUROSIS-ETI BVBA},
                year = {2013},
                note = {FUBUTEC'2013, Future Business Technology Conference, June 10-12, 2013, University of Lincoln, Lincoln, UK},
            keywords = {ARRAY(0x7fdc781af140)},
                 url = {http://eprints.lincoln.ac.uk/22903/},
            abstract = {This edition covers Risk Management, Management Techniques, Production Design Optimization and Video Applications}
    }
  • G. Cielniak, N. Bellotto, and T. Duckett, “Integrating mobile robotics and vision with undergraduate computer science,” IEEE Transactions on Education, vol. 56, iss. 1, pp. 48-53, 2013.
    [BibTeX] [Abstract] [EPrints]

    This paper describes the integration of robotics education into an undergraduate Computer Science curriculum. The proposed approach delivers mobile robotics as well as covering the closely related field of Computer Vision, and is directly linked to the research conducted at the authors' institution. The paper describes the most relevant details of the module content and assessment strategy, paying particular attention to the practical sessions using Rovio mobile robots. The specific choices are discussed that were made with regard to the mobile platform, software libraries and lab environment. The paper also presents a detailed qualitative and quantitative analysis of student results, including the correlation between student engagement and performance, and discusses the outcomes of this experience.

    @article{lirolem6031,
              volume = {56},
              number = {1},
               month = {February},
              author = {Grzegorz Cielniak and Nicola Bellotto and Tom Duckett},
               title = {Integrating mobile robotics and vision with undergraduate computer science},
           publisher = {The IEEE Education Society},
                year = {2013},
             journal = {IEEE Transactions on Education},
               pages = {48--53},
            keywords = {ARRAY(0x7fdc78086400)},
                 url = {http://eprints.lincoln.ac.uk/6031/},
            abstract = {This paper describes the integration of robotics education into an undergraduate Computer Science curriculum. The proposed approach delivers mobile robotics as well as covering the closely related field of Computer Vision, and is directly linked to the research conducted at the authors' institution. The paper describes the most relevant details of the module content and assessment strategy, paying particular attention to the practical sessions using Rovio mobile robots. The specific choices are discussed that were made with regard to the mobile platform, software libraries and lab environment. The paper also presents a detailed qualitative and quantitative analysis of student results, including the correlation between student engagement and performance, and discusses the outcomes of this experience.
    }
    }
  • T. Duckett and A. Lilienthal, “Editorial,” Robotics and Autonomous Systems, vol. 61, iss. 10, pp. 1049-1050, 2013.
    [BibTeX] [Abstract] [EPrints]

    .

    @article{lirolem12768,
              volume = {61},
              number = {10},
               month = {October},
              author = {Tom Duckett and Achim Lilienthal},
                note = {Selected Papers from the 5th European Conference on Mobile Robots (ECMR 2011)},
               title = {Editorial},
           publisher = {Elsevier for North-Holland / Intelligent Autonomous Systems (IAS) Society},
                year = {2013},
             journal = {Robotics and Autonomous Systems},
               pages = {1049--1050},
            keywords = {ARRAY(0x7fdc7816c668)},
                 url = {http://eprints.lincoln.ac.uk/12768/},
            abstract = {.}
    }
  • T. Duckett, M. Hanheide, T. Krajnik, J. P. Fentanes, and C. Dondrup, “Spatio-temporal representation for cognitive control in long-term scenarios,” in International IEEE/EPSRC Workshop on Autonomous Cognitive Robotics, 2013.
    [BibTeX] [Abstract] [EPrints]

    The FP-7 Integrated Project STRANDS [1] is aimed at producing intelligent mobile robots that are able to operate robustly for months in dynamic human environments. To achieve long-term autonomy, the robots would need to understand the environment and how it changes over time. For that, we will have to develop novel approaches to extract 3D shapes, objects, people, and models of activity from sensor data gathered during months of autonomous operation. So far, the environment models used in mobile robotics have been tailored to capture static scenes and environment variations are largely treated as noise. Therefore, utilization of the static models in ever-changing, real world environments is difficult. We propose to represent the environment's spatio-temporal dynamics by its frequency spectrum.

    @inproceedings{lirolem14893,
           booktitle = {International IEEE/EPSRC Workshop on Autonomous Cognitive Robotics},
               month = {March},
               title = {Spatio-temporal representation for cognitive control in long-term scenarios},
              author = {Tom Duckett and Marc Hanheide and Tomas Krajnik and Jaime Pulido Fentanes and Christian Dondrup},
                year = {2013},
            keywords = {ARRAY(0x7fdc78193b10)},
                 url = {http://eprints.lincoln.ac.uk/14893/},
            abstract = {The FP-7 Integrated Project STRANDS [1] is aimed at producing intelligent mobile robots that are able to operate robustly for months in dynamic human environments. To achieve long-term autonomy, the robots would need to understand the environment and how it changes over time. For that, we will have to develop novel approaches to extract 3D shapes, objects, people, and models of activity from sensor data gathered during months of autonomous operation.
    So far, the environment models used in mobile robotics have been tailored to capture static scenes and environment variations are largely treated as noise. Therefore, utilization of the static models in ever-changing, real world environments is difficult. We propose to represent the environment's spatio-temporal dynamics by its frequency spectrum.}
    }
  • T. Krajnik, M. Nitsche, J. Faigl, M. Mejail, L. Preucil, and T. Duckett, “External localization system for mobile robotics,” in 16th International Conference on Advanced Robotics (ICAR 2013), 2013.
    [BibTeX] [Abstract] [EPrints]

    We present a fast and precise vision-based software intended for multiple robot localization. The core component of the proposed localization system is an efficient method for black and white circular pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and a low-cost camera, its core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. We propose a mathematical model of the method that makes it possible to calculate its precision, area of coverage, and processing speed from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also publish its source code, so that it can be used as an enabling technology for various mobile robotics problems.

    @inproceedings{lirolem12670,
           booktitle = {16th International Conference on Advanced Robotics (ICAR 2013)},
               month = {November},
               title = {External localization system for mobile robotics},
              author = {Tomas Krajnik and Matias Nitsche  and Jan Faigl and Marta Mejail and Libor Preucil and Tom Duckett},
           publisher = {IEEE},
                year = {2013},
             journal = {International Conference on Advanced Robotics, ICAR 2013 (Proceedings)},
            keywords = {ARRAY(0x7fdc78189638)},
                 url = {http://eprints.lincoln.ac.uk/12670/},
            abstract = {We present a fast and precise vision-based software intended for multiple robot localization. The core component of
    the proposed localization system is an efficient method for black and white circular pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and low-cost camera, its core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. We propose a mathematical model of the method that allows to calculate its precision, area of coverage, and processing speed from the camera's intrinsic parameters and hardware's processing capacity. The correctness of the presented model and
    performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also publish its source code; so, it can be used as an enabling technology for various mobile robotics problems.}
    }
  • C. Lang, S. Wachsmuth, M. Hanheide, and H. Wersing, “Facial communicative signal interpretation in human-robot interaction by discriminative video subsequence selection,” in IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, 2013, pp. 170-177.
    [BibTeX] [Abstract] [EPrints]

    Facial communicative signals (FCSs) such as head gestures, eye gaze, and facial expressions can provide useful feedback in conversations between people and also in human-robot interaction. This paper presents a pattern recognition approach for the interpretation of FCSs in terms of valence, based on the selection of discriminative subsequences in video data. These subsequences capture important temporal dynamics and are used as prototypical reference subsequences in a classification procedure based on dynamic time warping and feature extraction with active appearance models. Using this valence classification, the robot can discriminate positive from negative interaction situations and react accordingly. The approach is evaluated on a database containing videos of people interacting with a robot by teaching the names of several objects to it. The verbal answer of the robot is expected to elicit the display of spontaneous FCSs by the human tutor, which were classified in this work. The achieved classification accuracies are comparable to the average human recognition performance and outperformed our previous results on this task. © 2013 IEEE.

    @inproceedings{lirolem13775,
               month = {May},
              author = {C. Lang and S. Wachsmuth and M. Hanheide and H. Wersing},
                note = { Conference Code:100673},
           booktitle = {IEEE International Conference on Robotics and Automation (ICRA) },
             address = {Karlsruhe},
               title = {Facial communicative signal interpretation in human-robot interaction by discriminative video subsequence selection},
           publisher = {IEEE},
                year = {2013},
               pages = {170--177},
            keywords = {ARRAY(0x7fdc781bd068)},
                 url = {http://eprints.lincoln.ac.uk/13775/},
            abstract = {Facial communicative signals (FCSs) such as head gestures, eye gaze, and facial expressions can provide useful feedback in conversations between people and also in human-robot interaction. This paper presents a pattern recognition approach for the interpretation of FCSs in terms of valence, based on the selection of discriminative subsequences in video data. These subsequences capture important temporal dynamics and are used as prototypical reference subsequences in a classification procedure based on dynamic time warping and feature extraction with active appearance models. Using this valence classification, the robot can discriminate positive from negative interaction situations and react accordingly. The approach is evaluated on a database containing videos of people interacting with a robot by teaching the names of several objects to it. The verbal answer of the robot is expected to elicit the display of spontaneous FCSs by the human tutor, which were classified in this work. The achieved classification accuracies are comparable to the average human recognition performance and outperformed our previous results on this task. {\copyright} 2013 IEEE.}
    }
  • C. Lang, S. Wachsmuth, M. Hanheide, and H. Wersing, “Facial communicative signal interpretation in human-robot interaction by discriminative video subsequence selection,” in International Conference on Robotics and Automation (ICRA), 2013, pp. 170-177.
    [BibTeX] [Abstract] [EPrints]

    Facial communicative signals (FCSs) such as head gestures, eye gaze, and facial expressions can provide useful feedback in conversations between people and also in human-robot interaction. This paper presents a pattern recognition approach for the interpretation of FCSs in terms of valence, based on the selection of discriminative subsequences in video data. These subsequences capture important temporal dynamics and are used as prototypical reference subsequences in a classification procedure based on dynamic time warping and feature extraction with active appearance models. Using this valence classification, the robot can discriminate positive from negative interaction situations and react accordingly. The approach is evaluated on a database containing videos of people interacting with a robot by teaching the names of several objects to it. The verbal answer of the robot is expected to elicit the display of spontaneous FCSs by the human tutor, which were classified in this work. The achieved classification accuracies are comparable to the average human recognition performance and outperformed our previous results on this task.

    @inproceedings{lirolem7880,
               month = {May},
              author = {Christian Lang and Sven Wachsmuth and Marc Hanheide and Heiko Wersing},
                 note = {Facial communicative signals (FCSs) such as head gestures, eye gaze, and facial expressions can provide useful feedback in conversations between people and also in human-robot interaction. This paper presents a pattern recognition approach for the interpretation of FCSs in terms of valence, based on the selection of discriminative subsequences in video data. These subsequences capture important temporal dynamics and are used as prototypical reference subsequences in a classification procedure based on dynamic time warping and feature extraction with active appearance models. Using this valence classification, the robot can discriminate positive from negative interaction situations and react accordingly. The approach is evaluated on a database containing videos of people interacting with a robot by teaching the names of several objects to it. The verbal answer of the robot is expected to elicit the display of spontaneous FCSs by the human tutor, which were classified in this work. The achieved classification accuracies are comparable to the average human recognition performance and outperformed our previous results on this task.},
           booktitle = {International Conference on Robotics and Automation (ICRA)},
               title = {Facial communicative signal interpretation in human-robot interaction by discriminative video subsequence selection},
           publisher = {IEEE},
               pages = {170--177},
                year = {2013},
            keywords = {ARRAY(0x7fdc7803f380)},
                 url = {http://eprints.lincoln.ac.uk/7880/},
            abstract = {Facial communicative signals (FCSs) such as head gestures, eye gaze, and facial expressions can provide useful feedback in conversations between people and also in human-robot interaction. This paper presents a pattern recognition approach for the interpretation of FCSs in terms of valence, based on the selection of discriminative subsequences in video data. These subsequences capture important temporal dynamics and are used as prototypical reference subsequences in a classification procedure based on dynamic time warping and feature extraction with active appearance models. Using this valence classification, the robot can discriminate positive from negative interaction situations and react accordingly. The approach is evaluated on a database containing videos of people interacting with a robot by teaching the names of several objects to it. The verbal answer of the robot is expected to elicit the display of spontaneous FCSs by the human tutor, which were classified in this work. The achieved classification accuracies are comparable to the average human recognition performance and outperformed our previous results on this task.}
    }
  • C. Lang, S. Wachsmuth, M. Hanheide, and H. Wersing, “Facial communicative signal interpretation in human-robot interaction by discriminative video subsequence selection,” in IEEE International Conference on Robotics and Automation, ICRA 2013, Karlsruhe, 2013, pp. 170-177.
    [BibTeX] [Abstract] [EPrints]

    Facial communicative signals (FCSs) such as head gestures, eye gaze, and facial expressions can provide useful feedback in conversations between people and also in human-robot interaction. This paper presents a pattern recognition approach for the interpretation of FCSs in terms of valence, based on the selection of discriminative subsequences in video data. These subsequences capture important temporal dynamics and are used as prototypical reference subsequences in a classification procedure based on dynamic time warping and feature extraction with active appearance models. Using this valence classification, the robot can discriminate positive from negative interaction situations and react accordingly. The approach is evaluated on a database containing videos of people interacting with a robot by teaching the names of several objects to it. The verbal answer of the robot is expected to elicit the display of spontaneous FCSs by the human tutor, which were classified in this work. The achieved classification accuracies are comparable to the average human recognition performance and outperformed our previous results on this task. © 2013 IEEE.

    @inproceedings{lirolem13462,
              author = {C. Lang and S. Wachsmuth and M. Hanheide and H. Wersing},
                note = {Conference Code:100673},
           booktitle = {IEEE International Conference on Robotics and Automation, ICRA 2013},
             address = {Karlsruhe},
               title = {Facial communicative signal interpretation in human-robot interaction by discriminative video subsequence selection},
           publisher = {IEEE},
               pages = {170--177},
                year = {2013},
            keywords = {ARRAY(0x7fdc78082278)},
                 url = {http://eprints.lincoln.ac.uk/13462/},
            abstract = {Facial communicative signals (FCSs) such as head gestures, eye gaze, and facial expressions can provide useful feedback in conversations between people and also in human-robot interaction. This paper presents a pattern recognition approach for the interpretation of FCSs in terms of valence, based on the selection of discriminative subsequences in video data. These subsequences capture important temporal dynamics and are used as prototypical reference subsequences in a classification procedure based on dynamic time warping and feature extraction with active appearance models. Using this valence classification, the robot can discriminate positive from negative interaction situations and react accordingly. The approach is evaluated on a database containing videos of people interacting with a robot by teaching the names of several objects to it. The verbal answer of the robot is expected to elicit the display of spontaneous FCSs by the human tutor, which were classified in this work. The achieved classification accuracies are comparable to the average human recognition performance and outperformed our previous results on this task. {\copyright} 2013 IEEE.}
    }
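
    The two records above describe valence classification by comparing video subsequences with dynamic time warping (DTW). As a rough illustration of that comparison step only, the sketch below implements a plain DTW distance over per-frame feature vectors and a nearest-prototype decision; the active appearance model features and the discriminative subsequence selection from the papers are not reproduced, and all names and data here are illustrative.

    import numpy as np

    def dtw_distance(a, b):
        """Dynamic time warping distance between two sequences of
        per-frame feature vectors (shape: [frames, features])."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m]

    def classify_valence(sequence, prototypes):
        """Assign the label of the closest prototype subsequence.
        `prototypes` is a list of (label, subsequence) pairs, standing in
        for the discriminative subsequences selected in the papers."""
        best_label, best_dist = None, np.inf
        for label, proto in prototypes:
            d = dtw_distance(sequence, proto)
            if d < best_dist:
                best_label, best_dist = label, d
        return best_label

    # Toy usage with random stand-in features (e.g. per-frame appearance parameters).
    rng = np.random.default_rng(0)
    prototypes = [("positive", rng.normal(0, 1, (20, 8))),
                  ("negative", rng.normal(1, 1, (25, 8)))]
    query = rng.normal(1, 1, (30, 8))
    print(classify_valence(query, prototypes))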
  • F. Moreno, G. Cielniak, and T. Duckett, “Evaluation of laser range-finder mapping for agricultural spraying vehicles,” in Towards Autonomous Robotic Systems, 2013, pp. 210-221.
    [BibTeX] [Abstract] [EPrints]

    In this paper, we present a new application of laser range-finder sensing to agricultural spraying vehicles. The current generation of spraying vehicles use automatic controllers to maintain the height of the sprayer booms above the crop. However, these control systems are typically based on ultrasonic sensors mounted on the booms, which limits the accuracy of the measurements and the response of the controller to changes in the terrain, resulting in a sub-optimal spraying process. To overcome these limitations, we propose to use a laser scanner, attached to the front of the sprayer’s cabin, to scan the ground surface in front of the vehicle and to build a scrolling 3d map of the terrain. We evaluate the proposed solution in a series of field tests, demonstrating that the approach provides a more detailed and accurate representation of the environment than the current sonar-based solution, and which can lead to the development of more efficient boom control systems.

    @inproceedings{lirolem11330,
           booktitle = {Towards Autonomous Robotic Systems},
               month = {August},
               title = {Evaluation of laser range-finder mapping for agricultural spraying vehicles},
              author = {Francisco-Angel Moreno and Grzegorz Cielniak and Tom Duckett},
                year = {2013},
               pages = {210--221},
            keywords = {ARRAY(0x7fdc780820b0)},
                 url = {http://eprints.lincoln.ac.uk/11330/},
            abstract = {In this paper, we present a new application of laser range-finder sensing to agricultural spraying vehicles. The current generation of spraying vehicles use automatic controllers to maintain the height of the sprayer booms above the crop.
    However, these control systems are typically based on ultrasonic sensors mounted on the booms, which limits the accuracy of the measurements and the response of the controller to changes in the terrain, resulting in a sub-optimal spraying process. To overcome these limitations, we propose to use a laser scanner, attached to the front of the sprayer's cabin, to scan the ground surface in front of the vehicle and to build a scrolling 3d map of the terrain. We evaluate the proposed solution in a series of field tests, demonstrating that the approach provides a more detailed and accurate representation of the environment than the current sonar-based solution, and which can lead to the development of more efficient boom control systems.}
    }
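
    The entry above builds a scrolling 3D map of the terrain ahead of the sprayer from laser range-finder scans. The snippet below is a minimal sketch of that idea under simplifying assumptions: a fixed-size 2.5D height grid in vehicle coordinates that shifts as the vehicle advances. Cell size, map extent and the coordinate convention are illustrative, not taken from the paper.

    import numpy as np

    class ScrollingTerrainMap:
        """Tiny rolling height map in vehicle coordinates: laser points ahead
        of the vehicle are dropped into a grid, and the grid scrolls as the
        vehicle moves forward."""

        def __init__(self, length_m=20.0, width_m=10.0, cell_m=0.25):
            self.cell = cell_m
            self.rows = int(length_m / cell_m)   # along direction of travel
            self.cols = int(width_m / cell_m)
            self.height = np.full((self.rows, self.cols), np.nan)

        def add_scan(self, points_xyz):
            """Insert laser points given as (x forward, y left, z up) in metres."""
            for x, y, z in points_xyz:
                r = int(x / self.cell)
                c = int(y / self.cell + self.cols / 2)
                if 0 <= r < self.rows and 0 <= c < self.cols:
                    self.height[r, c] = z  # keep the latest reading per cell

        def scroll(self, forward_m):
            """Shift the map backwards as the vehicle moves forward."""
            shift = int(round(forward_m / self.cell))
            if shift > 0:
                self.height = np.roll(self.height, -shift, axis=0)
                self.height[-shift:, :] = np.nan

    # Toy usage: one scan line 5 m ahead, then advance 1 m.
    m = ScrollingTerrainMap()
    m.add_scan([(5.0, y, 0.1 * y) for y in np.linspace(-4, 4, 33)])
    m.scroll(1.0)
    print(np.nanmax(m.height))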
  • P. S. Teh, A. B. J. Teoh, and S. Yue, “A survey of keystroke dynamics biometrics,” The Scientific World Journal, vol. 2013, p. 408280, 2013.
    [BibTeX] [Abstract] [EPrints]

    Research on keystroke dynamics biometrics has been increasing, especially in the last decade. The main motivation behind this effort is due to the fact that keystroke dynamics biometrics is economical and can be easily integrated into the existing computer security systems with minimal alteration and user intervention. Numerous studies have been conducted in terms of data acquisition devices, feature representations, classification methods, experimental protocols, and evaluations. However, an up-to-date extensive survey and evaluation is not yet available. The objective of this paper is to provide an insightful survey and comparison on keystroke dynamics biometrics research performed throughout the last three decades, as well as offering suggestions and possible future research directions.

    @article{lirolem12817,
              volume = {2013},
               month = {December},
              author = {Pin Shen Teh and Andrew Beng Jin Teoh and Shigang Yue},
               title = {A survey of keystroke dynamics biometrics},
           publisher = {Hindawi Publishing Corporation / Scientific World},
             journal = {The Scientific World Journal},
               pages = {408280},
                year = {2013},
            keywords = {ARRAY(0x7fdc78086700)},
                 url = {http://eprints.lincoln.ac.uk/12817/},
            abstract = {Research on keystroke dynamics biometrics has been increasing, especially in the last decade. The main motivation behind this effort is due to the fact that keystroke dynamics biometrics is economical and can be easily integrated into the existing computer security systems with minimal alteration and user intervention. Numerous studies have been conducted in terms of data acquisition devices, feature representations, classification methods, experimental protocols, and evaluations. However, an up-to-date extensive survey and evaluation is not yet available. The objective of this paper is to provide an insightful survey and comparison on keystroke dynamics biometrics research performed throughout the last three decades, as well as offering suggestions and possible future research directions.}
    }
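
    The survey above covers feature representations for keystroke dynamics. A toy sketch of the two timing features most commonly used in that literature, dwell time and flight time, is given below; the event format and the example values are made up for illustration and are not taken from the survey.

    def keystroke_features(events):
        """Classic keystroke-dynamics timing features from a list of
        (key, press_time, release_time) tuples: per-key dwell times and
        flight times between consecutive keys, in seconds."""
        dwell = [release - press for _, press, release in events]
        flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
        return dwell, flight

    # Toy usage: typing "a", "b", "c".
    events = [("a", 0.00, 0.08), ("b", 0.15, 0.22), ("c", 0.40, 0.47)]
    print(keystroke_features(events))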
  • A. Wystrach, M. Mangan, A. Philippides, and P. Graham, “Snapshots in ants? New interpretations of paradigmatic experiments,” Journal of Experimental Biology, vol. 216, iss. 10, pp. 1766-1770, 2013.
    [BibTeX] [Abstract] [EPrints]

    Ants can use visual information to guide long idiosyncratic routes and accurately pinpoint locations in complex natural environments. It has often been assumed that the world knowledge of these foragers consists of multiple discrete views that are retrieved sequentially for breaking routes into sections controlling approaches to a goal. Here we challenge this idea using a model of visual navigation that does not store and use discrete views to replicate the results from paradigmatic experiments that have been taken as evidence that ants navigate using such discrete snapshots. Instead of sequentially retrieving views, the proposed architecture gathers information from all experienced views into a single memory network, and uses this network all along the route to determine the most familiar heading at a given location. This algorithm is consistent with the navigation of ants in both laboratory and natural environments, and provides a parsimonious solution to deal with visual information from multiple locations.

    @article{lirolem23579,
              volume = {216},
              number = {10},
               month = {May},
              author = {Antoine Wystrach and Michael Mangan and Andrew Philippides and Paul Graham},
               title = {Snapshots in ants? New interpretations of paradigmatic experiments},
           publisher = {The Company of Biologists Ltd},
                year = {2013},
             journal = {Journal of Experimental Biology},
               pages = {1766--1770},
            keywords = {ARRAY(0x7fdc78082248)},
                 url = {http://eprints.lincoln.ac.uk/23579/},
            abstract = {Ants can use visual information to guide long idiosyncratic routes and accurately pinpoint locations in complex natural environments. It has often been assumed that the world knowledge of these foragers consists of multiple discrete views that are retrieved sequentially for breaking routes into sections controlling approaches to a goal. Here we challenge this idea using a model of visual navigation that does not store and use discrete views to replicate the results from paradigmatic experiments that have been taken as evidence that ants navigate using such discrete snapshots. Instead of sequentially retrieving views, the proposed architecture gathers information from all experienced views into a single memory network, and uses this network all along the route to determine the most familiar heading at a given location. This algorithm is consistent with the navigation of ants in both laboratory and natural environments, and provides a parsimonious solution to deal with visual information from multiple locations.}
    }
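
    The model above replaces discrete snapshots with a single memory that is used to pick the most familiar heading at each location. The sketch below captures only that decision rule, with a deliberately crude familiarity measure (distance to the closest stored view) standing in for the paper's memory network; views are random vectors purely for illustration.

    import numpy as np

    def familiarity(view, memory):
        """Higher is more familiar: negative distance to the closest stored
        view (a crude stand-in for the paper's memory network)."""
        return -min(np.linalg.norm(view - m) for m in memory)

    def most_familiar_heading(current_views_by_heading, memory):
        """Pick the heading whose associated view is most familiar.
        `current_views_by_heading` maps candidate headings (degrees) to the
        panoramic view the agent would see when facing that way."""
        return max(current_views_by_heading,
                   key=lambda h: familiarity(current_views_by_heading[h], memory))

    # Toy usage: a memory of route views, then a scan over candidate headings.
    rng = np.random.default_rng(1)
    memory = [rng.random(64) for _ in range(10)]
    candidates = {h: rng.random(64) for h in range(0, 360, 30)}
    candidates[90] = memory[3] + 0.01 * rng.random(64)  # make 90 deg look familiar
    print(most_familiar_heading(candidates, memory))    # expected: 90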
  • J. Xu, S. Yue, and Y. Tang, “A motion attention model based on rarity weighting and motion cues in dynamic scenes,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 27, iss. 06, p. 1355009, 2013.
    [BibTeX] [Abstract] [EPrints]

    Nowadays, motion attention model is a controversial topic in the biological computer vision area. The computational attention model can be decomposed into a set of features via predefined channels. Here we designed a bio-inspired vision attention model, and added the rarity measurement onto it. The priority of rarity is emphasized under the assumption of weighting effect upon the features logic fusion. At this stage, a final saliency map at each frame is adjusted by the spatiotemporal and rarity values. By doing this, the process of mimicking human vision attention becomes more realistic and logical to the real circumstance. The experiments are conducted on the benchmark dataset of static images and video sequences. We simulated the attention shift based on several dataset. Most importantly, our dynamic scenes are mostly selected from the objects moving on the highway and dynamic scenes. The former one can be developed on the detection of car collision and will be a useful tool for further application in robotics. We also conduct experiment on the other video clips to prove the rationality of rarity factor and feature cues fusion methods. Finally, the evaluation results indicate our visual attention model outperforms several state-of-the-art motion attention models.

    @article{lirolem13793,
              volume = {27},
              number = {06},
               month = {September},
              author = {Jiawei Xu and Shigang Yue and Yuchao Tang},
               title = {A motion attention model based on rarity weighting and motion cues in dynamic scenes},
           publisher = {World Scientific Publishing},
                year = {2013},
             journal = {International Journal of Pattern Recognition and Artificial Intelligence},
               pages = {1355009},
            keywords = {ARRAY(0x7fdc78172d20)},
                 url = {http://eprints.lincoln.ac.uk/13793/},
             abstract = {Nowadays, motion attention model is a controversial topic in the biological computer vision area. The computational attention model can be decomposed into a set of features via predefined channels. Here we designed a bio-inspired vision attention model, and added the rarity measurement onto it. The priority of rarity is emphasized under the assumption of weighting effect upon the features logic fusion. At this stage, a final saliency map at each frame is adjusted by the spatiotemporal and rarity values. By doing this, the process of mimicking human vision attention becomes more realistic and logical to the real circumstance. The experiments are conducted on the benchmark dataset of static images and video sequences. We simulated the attention shift based on several dataset. Most importantly, our dynamic scenes are mostly selected from the objects moving on the highway and dynamic scenes. The former one can be developed on the detection of car collision and will be a useful tool for further application in robotics. We also conduct experiment on the other video clips to prove the rationality of rarity factor and feature cues fusion methods. Finally, the evaluation results indicate our visual attention model outperforms several state-of-the-art motion attention models.}
    }
  • S. Yue and C. F. Rind, “Postsynaptic organizations of directional selective visual neural networks for collision detection,” Neurocomputing, vol. 103, pp. 50-62, 2013.
    [BibTeX] [Abstract] [EPrints]

    In this paper, we studied the postsynaptic organizations of directional selective visual neurons for collision detection. Directional selective neurons can extract different directional visual motion cues fast and reliably by allowing inhibition spreads to further layers in specific directions with one or several time steps delay. Whether these directional selective neurons can be easily organised for other specific visual tasks is not known. Taking collision detection as the primary visual task, we investigated the postsynaptic organizations of these directional selective neurons through evolutionary processes. The evolved postsynaptic organizations demonstrated robust properties in detecting imminent collisions in complex visual environments with many of which achieved 94\% success rate after evolution suggesting active roles in collision detection directional selective neurons and its postsynaptic organizations can play.

    @article{lirolem9308,
              volume = {103},
               month = {March},
              author = {Shigang Yue and F. Claire Rind},
               title = {Postsynaptic organizations of directional selective visual neural networks for collision detection},
           publisher = {Elsevier Science Limited},
             journal = {Neurocomputing},
               pages = {50--62},
                year = {2013},
            keywords = {ARRAY(0x7fdc781bf530)},
                 url = {http://eprints.lincoln.ac.uk/9308/},
            abstract = {In this paper, we studied the postsynaptic organizations of directional selective visual neurons for collision detection. Directional selective neurons can extract different directional visual motion cues fast and reliably by allowing inhibition spreads to further layers in specific directions with one or several time steps delay. Whether these directional selective neurons can be easily organised for other specific visual tasks is not known. Taking collision detection as the primary visual task, we investigated the postsynaptic organizations of these directional selective neurons through evolutionary processes. The evolved postsynaptic organizations demonstrated robust properties in detecting imminent collisions in complex visual environments with many of which achieved 94\% success rate after evolution suggesting active roles in collision detection directional selective neurons and its postsynaptic organizations can play. }
    }
  • S. Yue and C. F. Rind, “Redundant neural vision systems: competing for collision recognition roles,” IEEE Transactions on Autonomous Mental Development, vol. 5, iss. 2, pp. 173-186, 2013.
    [BibTeX] [Abstract] [EPrints]

    Ability to detect collisions is vital for future robots that interact with humans in complex visual environments. Lobula giant movement detectors (LGMD) and directional selective neurons (DSNs) are two types of identified neurons found in the visual pathways of insects such as locusts. Recent modelling studies showed that the LGMD or grouped DSNs could each be tuned for collision recognition. In both biological and artificial vision systems, however, which one should play the collision recognition role and the way the two types of specialized visual neurons could be functioning together are not clear. In this modeling study, we compared the competence of the LGMD and the DSNs, and also investigate the cooperation of the two neural vision systems for collision recognition via artificial evolution. We implemented three types of collision recognition neural subsystems – the LGMD, the DSNs and a hybrid system which combines the LGMD and the DSNs subsystems together, in each individual agent. A switch gene determines which of the three redundant neural subsystems plays the collision recognition role. We found that, in both robotics and driving environments, the LGMD was able to build up its ability for collision recognition quickly and robustly therefore reducing the chance of other types of neural networks to play the same role. The results suggest that the LGMD neural network could be the ideal model to be realized in hardware for collision recognition.

    @article{lirolem9307,
              volume = {5},
              number = {2},
               month = {June},
              author = {Shigang Yue and F. Claire Rind},
               title = {Redundant neural vision systems: competing for collision recognition roles},
           publisher = {IEEE / Institute of Electrical and Electronics Engineers Incorporated},
                year = {2013},
             journal = {IEEE Transactions on Autonomous Mental Development},
               pages = {173--186},
            keywords = {ARRAY(0x7fdc7802e820)},
                 url = {http://eprints.lincoln.ac.uk/9307/},
            abstract = {Ability to detect collisions is vital for future robots that interact with humans in complex visual environments. Lobula giant movement detectors (LGMD) and directional selective neurons (DSNs) are two types of identified neurons found in the visual pathways of insects such as locusts. Recent modelling studies showed that the LGMD or grouped DSNs could each be tuned for collision recognition. In both biological and artificial vision systems, however, which one should play the collision recognition role and the way the two types of specialized visual neurons could be functioning together are not clear. In this modeling study, we compared the competence of the LGMD and the DSNs, and also investigate the cooperation of the two neural vision systems for collision recognition via artificial evolution. We implemented three types of collision recognition neural subsystems -- the LGMD, the DSNs and a hybrid system which combines the LGMD and the DSNs subsystems together, in each individual agent. A switch gene determines which of the three redundant neural subsystems plays the collision recognition role. We found that, in both robotics and driving environments, the LGMD was able to build up its ability for collision recognition quickly and robustly therefore reducing the chance of other types of neural networks to play the same role. The results suggest that the LGMD neural network could be the ideal model to be realized in hardware for collision recognition.}
    }
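
    The two studies above model the LGMD neuron for collision recognition. The snippet below is a heavily reduced, illustrative LGMD-style computation (frame-difference excitation, blurred delayed inhibition, mean rectified sum); it is not the authors' model, and all parameter values are arbitrary placeholders.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def lgmd_response(frames, inhibition_weight=0.6, spike_threshold=0.15):
        """Very reduced LGMD-style collision cue: excitation is the absolute
        frame difference, lateral inhibition is a blurred copy of the previous
        difference, and the membrane potential is the mean rectified sum."""
        prev_diff = np.zeros_like(frames[0], dtype=float)
        potentials = []
        for t in range(1, len(frames)):
            excitation = np.abs(frames[t].astype(float) - frames[t - 1].astype(float))
            inhibition = uniform_filter(prev_diff, size=5)
            s = np.maximum(excitation - inhibition_weight * inhibition, 0.0)
            potentials.append(s.mean() / 255.0)
            prev_diff = excitation
        return potentials, [p > spike_threshold for p in potentials]

    # Toy usage: an object that grows (looms) across frames raises the potential.
    frames = []
    for r in range(2, 20, 2):
        img = np.zeros((64, 64), dtype=np.uint8)
        img[32 - r:32 + r, 32 - r:32 + r] = 255
        frames.append(img)
    print(lgmd_response(frames)[0])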
  • G. Zahi and S. Yue, “Automatic detection of low light images in a video sequence Shot under different light conditions,” in Modelling Symposium (EMS), 2013 European, 2013, pp. 271-276.
    [BibTeX] [Abstract] [EPrints]

    Nocturnal insects have the ability to neurally sum visual signals in space and time to be able to see under very low light conditions. This ability shown by nocturnal insects has inspired many researchers to develop a night vision algorithm, that is capable of significantly improving the quality and reliability of digital images captured under very low light conditions. This algorithm however when applied to day time images rather degrades their quality. It is therefore not suitable to apply the night vision algorithms equally to an image stream with different light conditions. This paper introduces a quick method of automatically determining when to apply the nocturnal vision algorithm by analysing the cumulative intensity histogram of each image in the stream. The effectiveness of this method is demonstrated with relevant experiments in a good and acceptable way.

    @inproceedings{lirolem13757,
           booktitle = {Modelling Symposium (EMS), 2013 European},
               month = {November},
               title = {Automatic detection of low light images in a video sequence Shot under different light conditions},
              author = {Gabriel Zahi and Shigang Yue},
           publisher = {IEEE},
                year = {2013},
               pages = {271--276},
            keywords = {ARRAY(0x7fdc78189f80)},
                 url = {http://eprints.lincoln.ac.uk/13757/},
            abstract = {Nocturnal insects have the ability to neurally sum visual signals in space and time to be able to see under very low light conditions. This ability shown by nocturnal insects has inspired many researchers to develop a night vision algorithm, that is capable of significantly improving the quality and reliability of digital images captured under very low light conditions. This algorithm however when applied to day time images rather degrades their quality. It is therefore not suitable to apply the night vision algorithms equally to an image stream with different light conditions. This paper introduces a quick method of automatically determining when to apply the nocturnal vision algorithm by analysing the cumulative intensity histogram of each image in the stream. The effectiveness of this method is demonstrated with relevant experiments in a good and acceptable way.}
    }
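
    The paper above decides when to apply a night-vision algorithm by analysing the cumulative intensity histogram of each frame. A minimal sketch of that kind of test is shown below; the two thresholds are illustrative placeholders, not the values used in the paper.

    import numpy as np

    def is_low_light(gray_image, intensity_cutoff=64, mass_threshold=0.8):
        """Flag a frame as low light if most of its pixel mass (mass_threshold)
        lies below a low grey level (intensity_cutoff), judged from the
        cumulative intensity histogram."""
        hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
        cumulative = np.cumsum(hist) / gray_image.size
        return cumulative[intensity_cutoff] >= mass_threshold

    # Toy usage: a dark frame and a bright frame.
    rng = np.random.default_rng(2)
    dark = rng.integers(0, 50, (480, 640))
    bright = rng.integers(100, 255, (480, 640))
    print(is_low_light(dark), is_low_light(bright))  # True False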
  • M. Zillich, K. Zhou, D. Skocaj, M. Kristan, A. Vrecko, M. Mahnic, M. Janicek, G. M. Kruijff, T. Keller, M. Hanheide, and N. Hawes, “Robot George: interactive continuous learning of visual concepts,” in Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction, 2013, p. 425.
    [BibTeX] [Abstract] [EPrints]

    The video presents the robot George learning visual concepts in dialogue with a tutor.

    @inproceedings{lirolem8365,
           booktitle = {Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction},
               month = {March},
               title = {Robot George: interactive continuous learning of visual concepts},
              author = {Michael Zillich and Kai Zhou and Danijel Skocaj and Matej Kristan and Alen Vrecko and Marko Mahnic and Miroslav Janicek and Geert-Jan M. Kruijff and Thomas Keller and Marc Hanheide and Nick Hawes},
           publisher = {IEEE Press},
                year = {2013},
               pages = {425},
            keywords = {ARRAY(0x7fdc7816d0f8)},
                 url = {http://eprints.lincoln.ac.uk/8365/},
            abstract = {The video presents the robot George learning visual concepts in dialogue with a tutor.}
    }

2012

  • F. Arvin, A. E. Turgut, and S. Yue, “Fuzzy-based aggregation with a mobile robot swarm,” Lecture Notes in Computer Science, vol. 7461, pp. 346-347, 2012.
    [BibTeX] [Abstract] [EPrints]

    Aggregation is a widely observed phenomenon in social insects and animals such as cockroaches, honeybees and birds. From swarm robotics perspective [3], aggregation can be defined as gathering randomly distributed robots to form an aggregate. Honeybee aggregation is an example of cue-based aggregation method that was studied in [4]. In that study, micro robots were deployed in a gradually lighted environment to mimic the behavior of honeybees which aggregate around a zone that has the optimal temperature (BEECLUST). In our previous study [2], two modifications on BEECLUST – dynamic velocity and comparative waiting time – were applied to increase the performance of aggregation.

    @article{lirolem7328,
              volume = {7461},
               month = {September},
              author = {Farshad Arvin and Ali Emre Turgut and Shigang Yue},
               title = {Fuzzy-based aggregation with a mobile robot swarm},
           publisher = {Springer},
             journal = {Lecture Notes in Computer Science},
               pages = {346--347},
                year = {2012},
            keywords = {ARRAY(0x7fdc7805f4a8)},
                 url = {http://eprints.lincoln.ac.uk/7328/},
            abstract = {Aggregation is a widely observed phenomenon in social insects and animals such as cockroaches, honeybees and birds. From swarm robotics perspective [3], aggregation can be defined as gathering randomly distributed robots to form an aggregate. Honeybee aggregation is an example of cue-based aggregation method that was studied in [4]. In that study, micro robots were deployed in a gradually lighted environment to mimic the behavior of honeybees which aggregate around a zone that has the optimal temperature (BEECLUST). In our previous study [2], two modifications on BEECLUST -- dynamic velocity and comparative waiting time -- were applied to increase the performance of aggregation.}
    }
  • M. Barnes, M. Dudbridge, and T. Duckett, “Polarised light stress analysis and laser scatter imaging for non-contact inspection of heat seals in food trays,” Journal of Food Engineering, vol. 112, iss. 3, pp. 183-190, 2012.
    [BibTeX] [Abstract] [EPrints]

    This paper introduces novel non-contact methods for detecting faults in heat seals of food packages. Two alternative imaging technologies are investigated; laser scatter imaging and polarised light stress images. After segmenting the seal area from the rest of the respective image, a classifier is trained to detect faults in different regions of the seal area using features extracted from the pixels in the respective region. A very large set of candidate features, based on statistical information relating to the colour and texture of each region, is first extracted. Then an adaptive boosting algorithm (AdaBoost) is used to automatically select the best features for discriminating faults from non-faults. With this approach, different features can be selected and optimised for the different imaging methods. In experiments we compare the performance of classifiers trained using features extracted from laser scatter images only, polarised light stress images only, and a combination of both image types. The results show that the polarised light and laser scatter classifiers achieved accuracies of 96% and 90%, respectively, while the combination of both sensors achieved an accuracy of 95%. These figures suggest that both systems have potential for commercial development.

    @article{lirolem5513,
              volume = {112},
              number = {3},
               month = {October},
              author = {Michael Barnes and Michael Dudbridge and Tom Duckett},
               title = {Polarised light stress analysis and laser scatter imaging for non-contact inspection of heat seals in food trays},
           publisher = {Elsevier},
                year = {2012},
             journal = {Journal of Food Engineering},
               pages = {183--190},
            keywords = {ARRAY(0x7fdc7805f418)},
                 url = {http://eprints.lincoln.ac.uk/5513/},
            abstract = {This paper introduces novel non-contact methods for detecting faults in heat seals of food packages. Two alternative imaging technologies are investigated; laser scatter imaging and polarised light stress images. After segmenting the seal area from the rest of the respective image, a classifier is trained to detect faults in different regions of the seal area using features extracted from the pixels in the respective region. A very large set of candidate features, based on statistical information relating to the colour and texture of each region, is first extracted. Then an adaptive boosting algorithm (AdaBoost) is used to automatically select the best features for discriminating faults from non-faults. With this approach, different features can be selected and optimised for the different imaging methods. In experiments we compare the performance of classifiers trained using features extracted from laser scatter images only, polarised light stress images only, and a combination of both image types. The results show that the polarised light and laser scatter classifiers achieved accuracies of 96\% and 90\%, respectively, while the combination of both sensors achieved an accuracy of 95\%. These figures suggest that both systems have potential for commercial development.}
    }
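
    The article above extracts a large set of colour and texture statistics per seal region and lets AdaBoost select and combine the most discriminative ones. The sketch below shows that general pattern with a tiny, made-up feature set and synthetic data, using scikit-learn's AdaBoostClassifier; it is not the authors' feature set, imaging pipeline or evaluation protocol.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    def region_features(region):
        """Simple per-region statistics (mean, std, min, max, median);
        the paper uses a much larger candidate feature set."""
        return np.array([region.mean(), region.std(),
                         region.min(), region.max(), np.median(region)])

    rng = np.random.default_rng(3)
    # Toy data: intact seal regions vs. regions with a simulated fault.
    good = [rng.normal(0.5, 0.05, (32, 32)) for _ in range(200)]
    faulty = [rng.normal(0.5, 0.20, (32, 32)) for _ in range(200)]
    X = np.array([region_features(r) for r in good + faulty])
    y = np.array([0] * len(good) + [1] * len(faulty))

    # AdaBoost both weights informative features (via its shallow decision-tree
    # weak learners) and classifies fault vs. non-fault regions.
    clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(clf.score(X, y))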
  • N. Bellotto, “Robot control based on qualitative representation of human trajectories,” in AAAI Spring Symposium, "Designing Intelligent Robots: Reintegrating AI", 2012.
    [BibTeX] [Abstract] [EPrints]

    A major challenge for future social robots is the high-level interpretation of human motion, and the consequent generation of appropriate robot actions. This paper describes some fundamental steps towards the real-time implementation of a system that allows a mobile robot to transform quantitative information about human trajectories (i.e. coordinates and speed) into qualitative concepts, and from these to generate appropriate control commands. The problem is formulated using a simple version of qualitative trajectory calculus, then solved using an inference engine based on fuzzy temporal logic and situation graph trees. Preliminary results are discussed and future directions of the current research are drawn.

    @inproceedings{lirolem4780,
           booktitle = { AAAI Spring Symposium, "Designing Intelligent Robots: Reintegrating AI"},
               month = {March},
               title = {Robot control based on qualitative representation of human trajectories},
              author = {Nicola Bellotto},
           publisher = {AAAI - Association for the Advancement of Artificial Intelligence},
                year = {2012},
                note = {A major challenge for future social robots is the high-level interpretation of human motion, and the consequent generation of appropriate robot actions. This paper describes some fundamental steps towards the real-time implementation of a system that allows a mobile robot to transform quantitative information about human trajectories (i.e. coordinates and speed) into qualitative concepts, and from these to generate appropriate control commands. The problem is formulated using a simple version of qualitative trajectory calculus, then solved using an inference engine based on fuzzy temporal logic and situation graph trees. Preliminary results are discussed and future directions of the current research are drawn.},
            keywords = {ARRAY(0x7fdc7804cd10)},
                 url = {http://eprints.lincoln.ac.uk/4780/},
            abstract = {A major challenge for future social robots is the high-level interpretation of human motion, and the consequent generation of appropriate robot actions. This paper describes some fundamental steps towards the real-time implementation of a system that allows a mobile robot to transform quantitative information about human trajectories (i.e. coordinates and speed) into qualitative concepts, and from these to generate appropriate control commands. The problem is formulated using a simple version of qualitative trajectory calculus, then solved using an inference engine based on fuzzy temporal logic and situation graph trees. Preliminary results are discussed and future directions of the current research are drawn.}
    }
  • N. Bellotto, B. Benfold, H. Harland, H. Nagel, N. Pirlo, I. Reid, E. Sommerlade, and C. Zhao, “Cognitive visual tracking and camera control,” Computer Vision and Image Understanding, vol. 116, iss. 3, pp. 457-471, 2012.
    [BibTeX] [Abstract] [EPrints]

    Cognitive visual tracking is the process of observing and understanding the behaviour of a moving person. This paper presents an efficient solution to extract, in real-time, high-level information from an observed scene, and generate the most appropriate commands for a set of pan-tilt-zoom (PTZ) cameras in a surveillance scenario. Such a high-level feedback control loop, which is the main novelty of our work, will serve to reduce uncertainties in the observed scene and to maximize the amount of information extracted from it. It is implemented with a distributed camera system using SQL tables as virtual communication channels, and Situation Graph Trees for knowledge representation, inference and high-level camera control. A set of experiments in a surveillance scenario show the effectiveness of our approach and its potential for real applications of cognitive vision.

    @article{lirolem4823,
              volume = {116},
              number = {3},
               month = {March},
              author = {Nicola Bellotto and Ben Benfold and Hanno Harland and Hans-Hellmut Nagel and Nicola Pirlo and Ian Reid and Eric Sommerlade and Chuan Zhao},
               title = {Cognitive visual tracking and camera control},
           publisher = {Elsevier},
                year = {2012},
             journal = {Computer Vision and Image Understanding},
               pages = {457--471},
            keywords = {ARRAY(0x7fdc78038da0)},
                 url = {http://eprints.lincoln.ac.uk/4823/},
            abstract = {Cognitive visual tracking is the process of observing and understanding the behaviour of a moving person. This paper presents an efficient solution to extract, in real-time, high-level information from an observed scene, and generate the most appropriate commands for a set of pan-tilt-zoom (PTZ) cameras in a surveillance scenario. Such a high-level feedback control loop, which is the main novelty of our work, will serve to reduce uncertainties in the observed scene and to maximize the amount of information extracted from it. It is implemented with a distributed camera system using SQL tables as virtual communication channels, and Situation Graph Trees for knowledge representation, inference and high-level camera control. A set of experiments in a surveillance scenario show the effectiveness of our approach and its potential for real applications of cognitive vision.}
    }
  • T. Belpaeme, P. Baxter, R. Read, R. Wood, H. Cuayáhuitl, B. Kiefer, S. Racioppa, I. Kruijff-Korbayová, G. Athanasopoulos, V. Enescu, R. Looije, M. Neerincx, Y. Demiris, R. Ros-Espinoza, A. Beck, L. Cañamero, A. Hiolle, M. Lewis, I. Baroni, M. Nalin, P. Cosi, G. Paci, F. Tesser, G. Sommavilla, and R. Humbert, “Multimodal child-robot interaction: building social bonds,” Journal of Human-Robot Interaction, vol. 1, iss. 2, 2012.
    [BibTeX] [Abstract] [EPrints]

    For robots to interact effectively with human users they must be capable of coordinated, timely behavior in response to social context. The Adaptive Strategies for Sustainable Long-Term Social Interaction (ALIZ-E) project focuses on the design of long-term, adaptive social interaction between robots and child users in real-world settings. In this paper, we report on the iterative approach taken to scientific and technical developments toward this goal: advancing individual technical competencies and integrating them to form an autonomous robotic system for evaluation "in the wild". The first evaluation iterations have shown the potential of this methodology in terms of adaptation of the robot to the interactant and the resulting influences on engagement. This sets the foundation for an ongoing research program that seeks to develop technologies for social robot companions.

    @article{lirolem22210,
              volume = {1},
              number = {2},
               month = {December},
              author = {Tony Belpaeme and Paul Baxter and Robin Read and Rachel Wood and Heriberto Cuay{\'a}huitl and Bernd Kiefer and Stefania Racioppa and Ivana Kruijff-Korbayov{\'a} and Georgios Athanasopoulos and Valentin Enescu and Rosemarijn Looije and Mark Neerincx and Yiannis Demiris and Raquel Ros-Espinoza and Aryel Beck and Lola Ca{\~n}amero and Antione Hiolle and Matthew Lewis and Ilaria Baroni and Marco Nalin and Piero Cosi and Giulio Paci and Fabio Tesser and Giacomo Sommavilla and Remi Humbert},
               title = {Multimodal child-robot interaction: building social bonds},
           publisher = {Clear Facts Research},
             journal = {Journal of Human-Robot Interaction},
                year = {2012},
            keywords = {ARRAY(0x7fdc78043980)},
                 url = {http://eprints.lincoln.ac.uk/22210/},
            abstract = {For robots to interact effectively with human users they must be capable of coordinated, timely behavior in response to social context. The Adaptive Strategies for Sustainable Long-Term Social Interaction (ALIZ-E) project focuses on the design of long-term, adaptive social interaction between robots and child users in real-world settings. In this paper, we report on the iterative approach taken to scientific and technical developments toward this goal: advancing individual technical competencies and integrating them to form an autonomous robotic system for evaluation "in the wild". The first evaluation iterations have shown the potential of this methodology in terms of adaptation of the robot to the interactant and the resulting influences on engagement. This sets the foundation for an ongoing research program that seeks to develop technologies for social robot companions.}
    }
  • J. Bird, T. Feltwell, and G. Cielniak, “Real-time adaptive track generation in racing games,” in GAMEON ‘2012, 2012, pp. 17-24.
    [BibTeX] [Abstract] [EPrints]

    Real-time Adaptive Track Generation in Racing Games

    @inproceedings{lirolem6900,
               month = {November},
              author = {Jake Bird and Tom Feltwell and Grzegorz Cielniak},
                note = {Real-time Adaptive Track Generation in Racing Games},
           booktitle = {GAMEON '2012},
               title = {Real-time adaptive track generation in racing games},
           publisher = {Eurosis},
               pages = {17--24},
                year = {2012},
            keywords = {ARRAY(0x7fdc7805f460)},
                 url = {http://eprints.lincoln.ac.uk/6900/},
            abstract = {Real-time Adaptive Track Generation in Racing Games}
    }
  • G. Cielniak, N. Bellotto, and T. Duckett, “Integrating vision and robotics into the computer science curriculum,” in 3rd International Workshop Teaching Robotics Teaching with Robotics: Integrating Robotics in School Curriculum, 2012.
    [BibTeX] [Abstract] [EPrints]

    This paper describes our efforts in integrating Robotics education into the undergraduate Computer Science curriculum. Our approach delivers Mobile Robotics together with the closely related field of Computer Vision and is directly linked to the research conducted at our institution. The paper describes the most relevant details related to the module content and assessment strategy, paying particular attention to the practical sessions using Rovio mobile webcams. We discuss the specific choices made with regard to the mobile platform, software libraries and lab environment. We also present a detailed qualitative and quantitative analysis, including the correlation between student engagement and performance, and discuss the outcomes of this experience.

    @inproceedings{lirolem5516,
           booktitle = {3rd International Workshop Teaching Robotics Teaching with Robotics: Integrating Robotics in School Curriculum},
               month = {April},
               title = {Integrating vision and robotics into the computer science curriculum},
              author = {Grzegorz Cielniak and Nicola Bellotto and Tom Duckett},
                year = {2012},
                note = {This paper describes our efforts in integrating Robotics education into the undergraduate Computer Science curriculum. Our approach delivers Mobile Robotics together with the closely related field of Computer Vision and is directly linked to the research conducted at our institution. The paper describes the most relevant details related to the module content and assessment strategy, paying particular attention to the practical sessions using Rovio mobile webcams. We discuss the specific choices made with regard to the mobile platform, software libraries and lab environment. We also present a detailed qualitative and quantitative analysis, including the correlation between student engagement and performance, and discuss the outcomes of this experience.},
            keywords = {ARRAY(0x7fdc78086778)},
                 url = {http://eprints.lincoln.ac.uk/5516/},
            abstract = {This paper describes our efforts in integrating Robotics education into the undergraduate Computer Science curriculum. Our approach delivers Mobile Robotics together with the closely related field of Computer Vision and is directly linked to the research conducted at our institution. The paper describes the most relevant details related to the module content and assessment strategy, paying particular attention to the practical sessions using Rovio mobile webcams. We discuss the specific choices made with regard to the mobile platform, software libraries and lab environment. We also present a detailed qualitative and quantitative analysis, including the correlation between student engagement and performance, and discuss the outcomes of this experience.}
    }
  • T. Feltwell, P. Dickinson, and G. Cielniak, “A framework for quantitative analysis of user-generated spatial data,” in GAMEON ‘2012, 2012, pp. 17-24.
    [BibTeX] [Abstract] [EPrints]

    This paper proposes a new framework for automated analysis of game-play metrics for aiding game designers in finding out the critical aspects of the game caused by factors like design modifications, change in playing style, etc. The core of the algorithm measures similarity between spatial distribution of user generated in-game events and automatically ranks them in order of importance. The feasibility of the method is demonstrated on a data set collected from a modern, multiplayer First Person Shooter, together with application examples of its use. The proposed framework can be used to accompany traditional testing tools and make the game design process more efficient.

    @inproceedings{lirolem6889,
               month = {November},
              author = {Tom Feltwell and Patrick Dickinson and Grzegorz Cielniak},
                 note = {This paper proposes a new framework for automated analysis of game-play metrics for aiding game designers in finding out the critical aspects of the game caused by factors like design modifications, change in playing style, etc. The core of the algorithm measures similarity between spatial distribution of user generated in-game events and automatically ranks them in order of importance. The feasibility of the method is demonstrated on a data set collected from a modern, multiplayer First Person Shooter, together with application examples of its use. The proposed framework can be used to accompany traditional testing tools and make the game design process more efficient.},
           booktitle = {GAMEON '2012},
               title = {A framework for quantitative analysis of user-generated spatial data},
           publisher = {Eurosis},
               pages = {17--24},
                year = {2012},
            keywords = {ARRAY(0x7fdc7805f490)},
                 url = {http://eprints.lincoln.ac.uk/6889/},
             abstract = {This paper proposes a new framework for automated analysis of game-play metrics for aiding game designers in finding out the critical aspects of the game caused by factors like design modifications, change in playing style, etc. The core of the algorithm measures similarity between spatial distribution of user generated in-game events and automatically ranks them in order of importance. The feasibility of the method is demonstrated on a data set collected from a modern, multiplayer First Person Shooter, together with application examples of its use. The proposed framework can be used to accompany traditional testing tools and make the game design process more efficient.}
    }
  • S. Ghidoni, G. Cielniak, and E. Menegatti, “Texture-based crowd detection and localisation,” in The 12th International Conference on Intelligent Autonomous Systems, 2012, pp. 725-736.
    [BibTeX] [Abstract] [EPrints]

    This paper presents a crowd detection system based on texture analysis. The state-of-the-art techniques based on co-occurrence matrix have been revisited and a novel set of features proposed. These features provide a richer description of the co-occurrence matrix, and can be exploited to obtain stronger classification results, especially when smaller portions of the image are considered. This is extremely useful for crowd localisation: acquired images are divided into smaller regions in order to perform a classification on each one. A thorough evaluation of the proposed system on a real world data set is also presented: this validates the improvements in reliability of the crowd detection and localisation.

    @inproceedings{lirolem5935,
           booktitle = {The 12th International Conference on Intelligent Autonomous Systems},
               month = {June},
               title = {Texture-based crowd detection and localisation},
              author = {Stefano Ghidoni and Grzegorz Cielniak and Emanuele Menegatti},
           publisher = {IEEE / Robotics and Automation Society},
                year = {2012},
               pages = {725--736},
            keywords = {ARRAY(0x7fdc78082710)},
                 url = {http://eprints.lincoln.ac.uk/5935/},
            abstract = {This paper presents a crowd detection system based on texture analysis. The state-of-the-art techniques based on co-occurrence matrix have been revisited and a novel set of features proposed. These features provide a richer description of the co-occurrence matrix, and can be exploited to obtain stronger classification results, especially when smaller portions of the image are considered. This is extremely useful for crowd localisation: acquired images are divided into smaller regions in order to perform a classification on each one. A thorough evaluation of the proposed system on a real world data set is also presented: this validates the improvements in reliability of the crowd detection and localisation.}
    }
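
    The entry above classifies image regions as crowded or not using features derived from the grey-level co-occurrence matrix. The sketch below computes a handful of standard co-occurrence properties per region with scikit-image (the newer graycomatrix/graycoprops naming is assumed); the paper's richer descriptor and its classifier are not reproduced, and the data is synthetic.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(patch, levels=32):
        """Co-occurrence based texture features for one image region.
        `patch` is a 2-D uint8 grey-level image; it is quantised to `levels`
        grey levels before building the co-occurrence matrix."""
        q = (patch.astype(np.uint32) * levels // 256).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        props = ("contrast", "homogeneity", "energy", "correlation")
        return np.array([graycoprops(glcm, p).mean() for p in props])

    # Split a frame into regions and compute one feature vector per region;
    # a classifier trained on such vectors would then flag crowded regions.
    rng = np.random.default_rng(4)
    frame = rng.integers(0, 256, (240, 320), dtype=np.uint8)
    regions = [frame[y:y + 60, x:x + 80]
               for y in range(0, 240, 60) for x in range(0, 320, 80)]
    features = np.array([glcm_features(r) for r in regions])
    print(features.shape)  # one feature vector per region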
  • M. Hanheide, M. Lohse, and H. Zender, “Expectations, intentions, and actions in human-robot interaction,” Internation Journal of Social Robotics, vol. 4, iss. 2, pp. 107-108, 2012.
    [BibTeX] [Abstract] [EPrints]

    From the issue entitled "Expectations, Intentions & Actions". Human-robot interaction is becoming increasingly complex through the growing number of abilities, both cognitive and physical, available to today's robots. At the same time, interaction is still often difficult because the users do not understand the robots' internal states, expectations, intentions, and actions. Vice versa, robots lack understanding of the users' expectations, intentions, actions, and social signals.

    @article{lirolem6562,
              volume = {4},
              number = {2},
               month = {April},
              author = {M. Hanheide and M. Lohse and H. Zender},
                 note = {From the issue entitled "Expectations, Intentions \& Actions". Human-robot interaction is becoming increasingly complex through the growing number of abilities, both cognitive and physical, available to today's robots. At the same time, interaction is still often difficult because the users do not understand the robots' internal states, expectations, intentions, and actions. Vice versa, robots lack understanding of the users' expectations, intentions, actions, and social signals.},
               title = {Expectations, intentions, and actions in human-robot interaction},
           publisher = {Springer},
                year = {2012},
             journal = {Internation Journal of Social Robotics},
               pages = {107--108},
            keywords = {ARRAY(0x7fdc780867f0)},
                 url = {http://eprints.lincoln.ac.uk/6562/},
             abstract = {From the issue entitled "Expectations, Intentions \& Actions". Human-robot interaction is becoming increasingly complex through the growing number of abilities, both cognitive and physical, available to today's robots. At the same time, interaction is still often difficult because the users do not understand the robots' internal states, expectations, intentions, and actions. Vice versa, robots lack understanding of the users' expectations, intentions, actions, and social signals.}
    }
  • M. Hanheide, A. Peters, and N. Bellotto, “Analysis of human-robot spatial behaviour applying a qualitative trajectory calculus,” in 21st IEEE International Symposium on Robot and Human Interactive Communication, 2012, pp. 689-694.
    [BibTeX] [Abstract] [EPrints]

    The analysis and understanding of human-robot joint spatial behaviour (JSB) such as guiding, approaching, departing, or coordinating movements in narrow spaces and its communicative and dynamic aspects are key requirements on the road towards more intuitive interaction, safe encounter, and appealing living with mobile robots. These endeavours demand for appropriate models and methodologies to represent JSB and facilitate its analysis. In this paper, we adopt a qualitative trajectory calculus (QTC) as a formal foundation for the analysis and representation of such spatial behaviour of a human and a robot based on a compact encoding of the relative trajectories of two interacting agents in a sequential model. We present this QTC together with a distance measure and a probabilistic behaviour model and outline its usage in an actual JSB study. We argue that the proposed QTC coding scheme and derived methodologies for analysis and modelling are flexible and extensible to be adapted for a variety of other scenarios and studies.

    @inproceedings{lirolem6750,
               month = {September},
              author = {Marc Hanheide and Annika Peters and Nicola Bellotto},
           booktitle = {21st IEEE International Symposium on Robot and Human Interactive Communication},
              editor = {B. Gottfried and H. Aghajan},
               title = {Analysis of human-robot spatial behaviour applying a qualitative trajectory calculus},
           publisher = {IEEE},
               pages = {689--694},
                year = {2012},
            keywords = {ARRAY(0x7fdc7805f5b0)},
                 url = {http://eprints.lincoln.ac.uk/6750/},
            abstract = {The analysis and understanding of human-robot joint spatial behaviour (JSB) such as guiding, approaching, departing, or coordinating movements in narrow spaces and its communicative and dynamic aspects are key requirements on the road towards more intuitive interaction, safe encounter, and appealing living with mobile robots. These endeavours demand for appropriate models and methodologies to represent JSB and facilitate its analysis. In this paper, we adopt a qualitative trajectory calculus (QTC) as a formal foundation for the analysis and representation of such spatial behaviour of a human and a robot based on a compact encoding of the relative trajectories of two interacting agents in a sequential model. We present this QTC together with a distance measure and a probabilistic behaviour model and outline its usage in an actual JSB study. We argue that the proposed QTC coding scheme and derived methodologies for analysis and modelling are flexible and extensible to be adapted for a variety of other scenarios and studies.}
    }
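
    The paper above encodes the relative motion of a human and a robot with a qualitative trajectory calculus (QTC). The sketch below gives a simplified reading of the basic QTC_B idea: for each agent and time step, record whether it moves towards (-), away from (+), or keeps its distance to (0) the other agent. The reference-point convention and the threshold are simplifications for illustration, not the paper's exact definition.

    import numpy as np

    def qtc_b_symbol(p_prev, p_now, q_now, eps=1e-3):
        """One QTC_B component: is the agent at p moving towards (-),
        away from (+), or keeping its distance to (0) the agent at q."""
        d_prev = np.linalg.norm(q_now - p_prev)
        d_now = np.linalg.norm(q_now - p_now)
        if d_now < d_prev - eps:
            return "-"
        if d_now > d_prev + eps:
            return "+"
        return "0"

    def qtc_b_sequence(human, robot):
        """Encode two synchronised 2-D trajectories (lists of xy points)
        as a sequence of (human symbol, robot symbol) QTC_B states."""
        states = []
        for t in range(1, len(human)):
            h = qtc_b_symbol(np.array(human[t - 1]), np.array(human[t]), np.array(robot[t]))
            r = qtc_b_symbol(np.array(robot[t - 1]), np.array(robot[t]), np.array(human[t]))
            states.append((h, r))
        return states

    # Toy usage: the human approaches a robot that stays put.
    human = [(x, 0.0) for x in np.linspace(5.0, 1.0, 6)]
    robot = [(0.0, 0.0)] * 6
    print(qtc_b_sequence(human, robot))  # [('-', '0'), ...]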
  • J. Hutton, G. Harper, and T. Duckett, “A prototype low-cost machine vision system for automatic identification and quantification of potato defects,” in The Dundee Conference – Crop Protection in Northern Britain 2012, 2012, pp. 273-278.
    [BibTeX] [Abstract] [EPrints]

    This paper reports on a current project to develop a prototype system for the automatic identification and quantification of potato defects based on machine vision. The system developed uses off-the-shelf hardware, including a low-cost vision sensor and a standard desktop computer with a graphics processing unit (GPU), together with software algorithms to enable detection, identification and quantification of common defects affecting potatoes at near-real-time frame rates. The system uses state-of-the-art image processing and machine learning techniques to automatically learn the appearance of different defect types. It also incorporates an intuitive graphical user interface (GUI) to enable easy set-up of the system by quality control (QC) staff working in the industry.

    @inproceedings{lirolem14511,
           booktitle = {The  Dundee Conference  - Crop  Protection  in  Northern  Britain  2012},
               month = {February},
               title = {A prototype low-cost machine vision system for automatic identification and quantification of potato defects},
              author = {Jamie Hutton and Glyn Harper and Tom Duckett},
           publisher = {Proceedings Crop Protection in Northern Britain 2012},
                year = {2012},
               pages = {273--278},
            keywords = {ARRAY(0x7fdc780862b0)},
                 url = {http://eprints.lincoln.ac.uk/14511/},
            abstract = {This paper reports on a current project to develop a prototype system
    for the automatic identification and quantification of potato defects based on
    machine vision. The system developed uses off-the-shelf hardware, including a
    low-cost vision sensor and a standard desktop computer with a graphics processing
    unit (GPU), together with software algorithms to enable detection, identification
    and quantification of common defects affecting potatoes at near-real-time frame
    rates. The system uses state-of-the-art image processing and machine learning
    techniques to automatically learn the appearance of different defect types. It also
    incorporates an intuitive graphical user interface (GUI) to enable easy set-up of the
    system by quality control (QC) staff working in the industry.}
    }
  • C. Jayne, S. Yue, and L. Iliadis, Engineering applications of neural networks, Heidelberg: Springer, 2012, vol. 311.
    [BibTeX] [Abstract] [EPrints]

    Proceedings of the 13th International Conference, EANN 2012, London, UK, September 20-23, 2012

    @book{lirolem7434,
              volume = {311},
              author = {Chrisina Jayne and Shigang Yue and Lazaros Iliadis},
              series = {Communications in computer and information science},
                note = {Proceeedings of the 13th International Conference, EANN 2012, London, UK, September 20-23, 2012},
             address = {Heidelberg},
               title = {Engineering applications of neural networks},
           publisher = {Springer},
                year = {2012},
            keywords = {ARRAY(0x7fdc78086508)},
                 url = {http://eprints.lincoln.ac.uk/7434/},
            abstract = {Proceeedings of the 13th International Conference, EANN 2012, London, UK, September 20-23, 2012}
    }
  • C. Jayne, S. Yue, and L. Iliadis, “Engineering Applications of Neural Networks: 13th International Conference, EANN 2012 London, UK, September 20-23, 2012 Proceedings,” in 13th International Conference, EANN 2012, Chengdu, 2012.
    [BibTeX] [Abstract] [EPrints]

    .

    @inproceedings{lirolem11609,
              volume = {311},
               month = {September},
              author = {Chrisina Jayne and Shigang Yue and Lazaros Iliadis},
                note = { Conference Code:98083},
           booktitle = {13th International Conference, EANN 2012},
             address = {Chengdu},
               title = {Engineering Applications of Neural Networks: 13th International Conference, EANN 2012 London, UK, September 20-23, 2012 Proceedings},
           publisher = {Springer},
                year = {2012},
            keywords = {ARRAY(0x7fdc7805f610)},
                 url = {http://eprints.lincoln.ac.uk/11609/},
            abstract = {.}
    }
  • C. Lang, S. Wachsmuth, M. Hanheide, and H. Wersing, “Facial communicative signals: valence recognition in task-oriented human-robot interaction,” International Journal of Social Robotics, vol. 4, iss. 3, pp. 249-262, 2012.
    [BibTeX] [Abstract] [EPrints]

    From the issue entitled "Measuring Human-Robots Interactions". This paper investigates facial communicative signals (head gestures, eye gaze, and facial expressions) as nonverbal feedback in human-robot interaction. Motivated by a discussion of the literature, we suggest scenario-specific investigations due to the complex nature of these signals and present an object-teaching scenario where subjects teach the names of objects to a robot, which in turn shall term these objects correctly afterwards. The robot's verbal answers are to elicit facial communicative signals of its interaction partners. We investigated the human ability to recognize this spontaneous facial feedback and also the performance of two automatic recognition approaches. The first one is a static approach yielding baseline results, whereas the second considers the temporal dynamics and achieved classification rates

    @article{lirolem6561,
              volume = {4},
              number = {3},
               month = {August},
              author = {Christian Lang and Sven Wachsmuth and Marc Hanheide and Heiko Wersing},
                note = {From the issue entitled "Measuring Human-Robots Interactions"
    This paper investigates facial communicative signals (head gestures, eye gaze, and facial expressions) as nonverbal feedback in human-robot interaction. Motivated by a discussion of the literature, we suggest scenario-specific investigations due to the complex nature of these signals and present an object-teaching scenario where subjects teach the names of objects to a robot, which in turn shall term these objects correctly afterwards. The robot?s verbal answers are to elicit facial communicative signals of its interaction partners. We investigated the human ability to recognize this spontaneous facial feedback and also the performance of two automatic recognition approaches. The first one is a static approach yielding baseline results, whereas the second considers the temporal dynamics and achieved classification rates},
               title = {Facial communicative signals: valence recognition in task-oriented human-robot interaction},
           publisher = {Springer},
                year = {2012},
             journal = {International Journal of Social Robotics},
               pages = {249--262},
            keywords = {ARRAY(0x7fdc7805f5c8)},
                 url = {http://eprints.lincoln.ac.uk/6561/},
            abstract = {From the issue entitled "Measuring Human-Robots Interactions"
    This paper investigates facial communicative signals (head gestures, eye gaze, and facial expressions) as nonverbal feedback in human-robot interaction. Motivated by a discussion of the literature, we suggest scenario-specific investigations due to the complex nature of these signals and present an object-teaching scenario where subjects teach the names of objects to a robot, which in turn shall term these objects correctly afterwards. The robot?s verbal answers are to elicit facial communicative signals of its interaction partners. We investigated the human ability to recognize this spontaneous facial feedback and also the performance of two automatic recognition approaches. The first one is a static approach yielding baseline results, whereas the second considers the temporal dynamics and achieved classification rates}
    }
  • S. Liu, Y. Tang, C. Zhang, and S. Yue, “Self-map building in wireless sensor network based on TDOA measurements,” in IASTED International Conference on Artificial Intelligence and Soft Computing, Hamburg, 2012, pp. 150-155.
    [BibTeX] [Abstract] [EPrints]

    Node localization has long been established as a key problem in sensor networks. Self-mapping in a wireless sensor network, which enables beacon-based systems to build a node map on-the-fly, extends the range of the sensor network's applications. A variety of self-mapping algorithms have been developed for sensor networks. Some algorithms assume no information and estimate only the relative locations of the sensor nodes. In this paper, we assume that a very small percentage of the sensor nodes are aware of their own locations, so the proposed algorithm estimates the other nodes' absolute locations using the distance differences. In particular, time difference of arrival (TDOA) technology is adopted to obtain the distance differences. The obtained time difference accuracy is 10 ns, which corresponds to a distance difference error of 3 m. We evaluate the self-mapping accuracy with a small number of seed nodes. Overall, the accuracy and the coverage are shown to be comparable to results achieved with other technologies and algorithms. © 2012 IEEE.

    @inproceedings{lirolem10769,
               month = {September},
              author = {S. Liu and Y. Tang and C. Zhang and S. Yue},
                note = {Conference Code:94291},
           booktitle = {IASTED International Conference on Artificial Intelligence and Soft Computing},
             address = {Hamburg},
               title = {Self-map building in wireless sensor network based on TDOA measurements},
           publisher = {IASTED},
                year = {2012},
               pages = {150--155},
            keywords = {ARRAY(0x7fdc7805f640)},
                 url = {http://eprints.lincoln.ac.uk/10769/},
            abstract = {Node localization has long been established as a key problem in the sensor networks. Self-mapping in wireless sensor network which enables beacon-based systems to build a node map on-the-fly extends the range of the sensor network's applications. A variety of self-mapping algorithms have been developed for the sensor networks. Some algorithms assume no information and estimate only the relative location of the sensor nodes. In this paper, we assume a very small percentage of the sensor nodes aware of their own locations, so the proposed algorithm estimates other node's absolute location using the distance differences. In particular, time difference of arrival (TDOA) technology is adopted to obtain the distance difference. The obtained time difference accuracy is 10ns which corresponds to a distance difference error of 3m. We evaluate self-mapping's accuracy with a small number of seed nodes. Overall, the accuracy and the coverage are shown to be comparable to those achieved results with other technologies and algorithms. {\^A}{\copyright} 2012 IEEE.}
    }
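
    To make the TDOA idea in the entry above concrete, here is a small least-squares sketch that estimates a node position from range differences to a handful of location-aware seed nodes; the function name, the use of scipy's generic solver and the noise-free example are illustrative assumptions rather than the paper's algorithm.

        import numpy as np
        from scipy.optimize import least_squares

        def locate_from_tdoa(anchors, range_diffs, x0=None):
            """Estimate a 2D node position from TDOA-style range differences.
            anchors: (M, 2) seed-node positions with known locations.
            range_diffs[i]: measured ||x - anchors[i]|| - ||x - anchors[0]|| in metres
            (the time difference of arrival multiplied by the propagation speed)."""
            anchors = np.asarray(anchors, float)
            if x0 is None:
                x0 = anchors.mean(axis=0)  # start the search at the anchor centroid

            def residuals(x):
                d = np.linalg.norm(anchors - x, axis=1)
                return (d[1:] - d[0]) - np.asarray(range_diffs[1:], float)

            return least_squares(residuals, x0).x

        # Noise-free example: true node at (2, 3) and four seed nodes.
        anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
        true_pos = np.array([2.0, 3.0])
        d = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
        diffs = [di - d[0] for di in d]
        print(locate_from_tdoa(anchors, diffs))  # approximately [2. 3.]
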
  • M. Mangan and B. Webb, “Spontaneous formation of multiple routes in individual desert ants (Cataglyphis velox),” Behavioral Ecology, vol. 23, iss. 5, pp. 944-954, 2012.
    [BibTeX] [Abstract] [EPrints]

    Desert ants make use of various navigational techniques, including path integration and visual route following, to forage efficiently in their extremely hostile environment. Species-specific differences in navigation have been demonstrated, although it remains unknown if these divergences are caused by environmental adaptation. In this work, we report on the navigational strategies of the European ant Cataglyphis velox, which inhabits a visually cluttered environment similar to the Australian honey ant Melophorus bagoti, although it is more closely related to other North African Cataglyphis species. We show that C. velox learn visually guided routes, and these are individual to each forager. Routes can be recalled in the absence of global path integration information or when placed in conflict with this information. Individual C. velox foragers are also shown to learn multiple routes through their habitat. These routes are learned rapidly, stored in long-term memory, and recalled for guidance as appropriate. Desert ants have previously been shown to learn multiple routes in an experimental manipulation, but this is the first report of such behavior emerging spontaneously. Learning multiple paths through the habitat over successive journeys provides a mechanism by which ants could memorize a series of interlaced courses, and thus perform complex navigation, without necessarily having a map of the environment. Key words: Cataglyphis velox, desert ant, foraging, learning, navigation, route, visual navigation. [Behav Ecol]

    @article{lirolem23577,
              volume = {23},
              number = {5},
               month = {September},
              author = {Michael Mangan and Barbara Webb},
               title = {Spontaneous formation of multiple routes in individual desert ants (Cataglyphis velox)},
           publisher = {Oxford University Press for  International Society for Behavioral Ecology},
                year = {2012},
             journal = {Behavioral Ecology},
               pages = {944--954},
            keywords = {ARRAY(0x7fdc7805f538)},
                 url = {http://eprints.lincoln.ac.uk/23577/},
            abstract = {Desert ants make use of various navigational techniques, including path integration and visual route following, to forage
    efficiently in their extremely hostile environment. Species-specific differences in navigation have been demonstrated, although
    it remains unknown if these divergences are caused by environmental adaptation. In this work, we report on the navigational
    strategies of the European ant Cataglyphis velox, which inhabits a visually cluttered environment similar to the Australian honey
    ant Melophorus bagoti, although it is more closely related to other North African Cataglyphis species. We show that C. velox learn
    visually guided routes, and these are individual to each forager. Routes can be recalled in the absence of global path integration
    information or when placed in conflict with this information. Individual C. velox foragers are also shown to learn multiple routes
    through their habitat. These routes are learned rapidly, stored in long-term memory, and recalled for guidance as appropriate.
    Desert ants have previously been shown to learn multiple routes in an experimental manipulation, but this is the first report
    of such behavior emerging spontaneously. Learning multiple paths through the habitat over successive journeys provides
    a mechanism by which ants could memorize a series of interlaced courses, and thus perform complex navigation, without
    necessarily having a map of the environment. Key words: Cataglyphis velox, desert ant, foraging, learning, navigation, route, visual
    navigation. [Behav Ecol]}
    }
  • O. Szymanezyk, T. Duckett, and P. Dickinson, “Agent-based crowd simulation in airports using games technology,” in Transactions on computational collective intelligence, Springer, 2012.
    [BibTeX] [Abstract] [EPrints]

    We adapt popular video-games technology for an agent-based crowd simulation framework in an airport terminal. To achieve this, we investigate game technology, crowd simulation and the unique traits of airports. Our findings are implemented in a virtual airport environment that exploits a scalable layered intelligence technique in combination with physics middleware and a social force approach for crowd simulation. Our experiments show that the framework runs at interactive frame rates, and we evaluate its scalability with an increasing number of agents, demonstrating event-triggered airport behaviour.

    @incollection{lirolem6574,
               month = {October},
              author = {Oliver Szymanezyk and Tom Duckett and Patrick Dickinson},
              series = {Lecture Notes in Computer Science},
                note = {Volume VIII, Issue 7430},
           booktitle = {Transactions on computational collective intelligence },
               title = {Agent-based crowd simulation in airports using games technology},
           publisher = {SPRINGER},
                year = {2012},
            keywords = {ARRAY(0x7fdc7805f4f0)},
                 url = {http://eprints.lincoln.ac.uk/6574/},
            abstract = {We adapt popular video-games technology for an agent-based crowd simulation framework in an airport terminal. To achieve this, we investigate game technology, crowd simulation and the unique traits of airports. Our findings are implemented in a virtual airport environment that exploits a scalable layered intelligence technique in combination with physics middleware and a social force approach for crowd simulation. Our experiments show that
    the framework runs at interactive frame-rate and evaluate the scalability with increasing number of agents demonstrating event triggered airport behaviour.}
    }
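
    The entry above mentions a social force approach for crowd simulation; purely as an illustration of that general idea (not the authors' engine), the sketch below performs one Helbing-style social force integration step with a goal-driving term and an exponential inter-agent repulsion. All parameter values and names are assumptions.

        import numpy as np

        def social_force_step(pos, vel, goals, dt=0.05, v0=1.3, tau=0.5, A=2.0, B=0.3, radius=0.3):
            """One Euler step of a generic Helbing-style social force model: each agent
            accelerates towards its goal at preferred speed v0 and is repelled
            exponentially by nearby agents. All constants are illustrative."""
            pos, vel, goals = (np.asarray(a, float) for a in (pos, vel, goals))
            force = np.zeros_like(pos)
            for i in range(len(pos)):
                to_goal = goals[i] - pos[i]
                desired = v0 * to_goal / (np.linalg.norm(to_goal) + 1e-9)
                force[i] = (desired - vel[i]) / tau                      # driving force towards the goal
                for j in range(len(pos)):
                    if i == j:
                        continue
                    diff = pos[i] - pos[j]
                    dist = np.linalg.norm(diff) + 1e-9
                    force[i] += A * np.exp((2 * radius - dist) / B) * diff / dist  # repulsion from agent j
            vel = vel + dt * force
            return pos + dt * vel, vel

        # Two agents walking towards each other's start position.
        p = [(0.0, 0.0), (5.0, 0.1)]
        v = [(0.0, 0.0), (0.0, 0.0)]
        g = [(5.0, 0.0), (0.0, 0.0)]
        p, v = social_force_step(p, v, g)
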
  • Y. Tang, J. Peng, S. Yue, and J. Xu, “A primal dual proximal point method of Chambolle-Pock algorithms for ℓ1-TV minimization problems in image reconstruction,” in 5th International Conference on Biomedical Engineering and Informatics, BMEI 2012, Chongqing, 2012, pp. 12-16.
    [BibTeX] [Abstract] [EPrints]

    Computed tomography (CT) image reconstruction problems can be solved by finding the minimizer of a suitable objective function. The objective function usually consists of a data fidelity term and a regularization term. Total variation (TV) minimization problems are widely used for solving incomplete data problems in CT image reconstruction. In this paper, we focus on the CT image reconstruction model which combines TV regularization and an ℓ1 data error term. We introduce a primal dual proximal point method of the Chambolle-Pock algorithm to solve the proposed optimization problem. We tested it on computer simulated data, and the experimental results show that it exhibits good performance when applied to few-view CT image reconstruction. © 2012 IEEE.

    @inproceedings{lirolem13409,
              author = {Y. Tang and Jigen Peng and Shigang Yue and Jiawei Xu},
           booktitle = {5th International Conference on Biomedical Engineering and Informatics, BMEI 2012},
             address = {Chongqing},
               title = {A primal dual proximal point method of Chambolle-Pock algorithms for ℓ1-TV minimization problems in image reconstruction},
           publisher = {IEEE},
             journal = {2012 5th International Conference on Biomedical Engineering and Informatics, BMEI 2012},
               pages = {12--16},
                year = {2012},
            keywords = {ARRAY(0x7fdc78086808)},
                 url = {http://eprints.lincoln.ac.uk/13409/},
            abstract = {Computed tomography (CT) image reconstruction problems can be solved by finding the minimizer of a suitable objective function. The objective function usually consists of a data fidelity term and a regularization term. Total variation (TV) minimization problems are widely used for solving incomplete data problems in CT image reconstruction. In this paper, we focus on the CT image reconstruction model which combines the TV regularization and ?1 data error term. We introduce a primal dual proximal point method of Chambolle-Pock algorithm to solve the proposed optimization problem. We tested it on computer simulated data and the experiment results shown it exhibited good performance when used to few-view CT image reconstruction. {\copyright} 2012 IEEE.}
    }
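
    For readers unfamiliar with the primal-dual scheme named in the entry above, the following sketch applies the generic Chambolle-Pock iteration to a TV-ℓ1 denoising problem, a simpler stand-in for CT reconstruction with no projection operator; the discretisation, step sizes and parameters are standard textbook choices, not the authors' implementation.

        import numpy as np

        def grad(u):
            # Forward differences with zero on the last row/column (Neumann boundary).
            gx = np.zeros_like(u); gy = np.zeros_like(u)
            gx[:-1, :] = u[1:, :] - u[:-1, :]
            gy[:, :-1] = u[:, 1:] - u[:, :-1]
            return gx, gy

        def div(px, py):
            # Negative adjoint of grad (discrete divergence).
            dx = np.zeros_like(px); dy = np.zeros_like(py)
            dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
            dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
            return dx + dy

        def tv_l1_chambolle_pock(f, lam=1.0, n_iter=200):
            """Minimise ||grad(u)||_1 + lam * ||u - f||_1 with the first-order primal-dual
            (Chambolle-Pock) iteration; a denoising stand-in for reconstruction."""
            f = np.asarray(f, float)
            tau = sigma = 1.0 / np.sqrt(8.0)          # tau * sigma * ||grad||^2 <= 1
            u, u_bar = f.copy(), f.copy()
            px, py = np.zeros_like(f), np.zeros_like(f)
            for _ in range(n_iter):
                gx, gy = grad(u_bar)                  # dual ascent step ...
                px, py = px + sigma * gx, py + sigma * gy
                norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
                px, py = px / norm, py / norm         # ... projected onto the unit ball
                u_old = u
                v = u + tau * div(px, py)             # primal descent step ...
                u = f + np.sign(v - f) * np.maximum(np.abs(v - f) - tau * lam, 0.0)  # ... then prox of lam*|u-f|
                u_bar = 2.0 * u - u_old               # over-relaxation
            return u

        denoised = tv_l1_chambolle_pock(np.random.rand(64, 64), lam=1.2)
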
  • P. S. Teh, S. Yue, and A. B. J. Teoh, “Improving keystroke dynamics authentication system via multiple feature fusion scheme,” in 2012 International Conference on Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), Kuala Lumpur, 2012, pp. 277-282.
    [BibTeX] [Abstract] [EPrints]

    This paper reports the performance and effect of combining diverse keystroke features in a keystroke dynamics authentication system using fusion schemes. First of all, four types of keystroke features are acquired from our collected dataset and then transformed into similarity scores by the use of the Gaussian Probability Density Function (GPD) and the Direction Similarity Measure (DSM). Next, three fusion schemes are introduced to merge the scores, paired with six fusion rules. Results show that the finest performance is obtained by combining dwell time and flight time collectively. Finally, this experiment also investigates the effect of using a larger dataset on performance, which turns out to be rather consistent. © 2012 IEEE.

    @inproceedings{lirolem10860,
              author = {Pin Shen Teh and Shigang Yue and A. B. J. Teoh},
                note = {Conference Code:92830},
           booktitle = {2012 International Conference on Cyber Security, Cyber Warfare and Digital Forensic (CyberSec)},
             address = {Kuala Lumpur},
               title = {Improving keystroke dynamics authentication system via multiple feature fusion scheme},
           publisher = {IEEE},
               pages = {277--282},
                year = {2012},
            keywords = {ARRAY(0x7fdc78041680)},
                 url = {http://eprints.lincoln.ac.uk/10860/},
            abstract = {This paper reports the performance and effect of diverse keystroke features combination on keystroke dynamic authentication system by using fusion scheme. First of all, four types of keystroke features are acquired from our collected dataset, later then transformed into similarity scores by the use of Gaussian Probability Density Function (GPD) and Direction Similarity Measure (DSM). Next, three fusion schemes are introduced to merge the scores pairing with six fusion rules. Result shows that the finest performance is obtained by the combination of both dwell time and flight time collectively. Finally, this experiment also investigates the effect of using larger dataset on performance, which turns out to be rather consistent. {\^A}{\copyright} 2012 IEEE.}
    }
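
    As a toy illustration of the score-level fusion of keystroke features described above, the sketch below turns timing features into Gaussian-style similarity scores and merges them with a weighted-sum rule; the formulas and constants are hypothetical simplifications, not the GPD/DSM definitions used in the paper.

        import numpy as np

        def gaussian_similarity(probe, mean, std):
            """Gaussian-style similarity between a probe's timing features and an
            enrolled template (per-feature mean/std), averaged over features."""
            z = (np.asarray(probe, float) - mean) / (std + 1e-9)
            return float(np.mean(np.exp(-0.5 * z ** 2)))

        def fuse_scores(scores, weights=None):
            """Weighted-sum fusion rule over per-feature-type similarity scores."""
            scores = np.asarray(scores, float)
            if weights is None:
                weights = np.full(len(scores), 1.0 / len(scores))
            return float(np.dot(weights, scores))

        # Fuse a dwell-time score and a flight-time score for one login attempt.
        dwell = gaussian_similarity([105, 98, 110], np.array([100.0, 95.0, 108.0]), np.array([8.0, 7.0, 9.0]))
        flight = gaussian_similarity([60, 72], np.array([58.0, 70.0]), np.array([6.0, 6.0]))
        print(fuse_scores([dwell, flight]))
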
  • Y. Utsumi, E. Sommerlade, N. Bellotto, and I. Reid, “Cognitive active vision for human identification,” in IEEE International Conference on Robotics and Automation (ICRA 2012), 2012.
    [BibTeX] [Abstract] [EPrints]

    We describe an integrated, real-time multi-camera surveillance system that is able to find and track individuals, acquire and archive facial image sequences, and perform face recognition. The system is based around an inference engine that can extract high-level information from an observed scene, and generate appropriate commands for a set of pan-tilt-zoom (PTZ) cameras. The incorporation of a reliable facial recognition into the high-level feedback is a main novelty of our work, showing how high-level understanding of a scene can be used to deploy PTZ sensing resources effectively. The system comprises a distributed camera system using SQL tables as virtual communication channels, Situation Graph Trees for knowledge representation, inference and high-level camera control, and a variety of visual processing algorithms including an on-line acquisition of facial images, and on-line recognition of faces by comparing image sets using subspace distance. We provide an extensive evaluation of this method using our system for both acquisition of training data, and later recognition. A set of experiments in a surveillance scenario show the effectiveness of our approach and its potential for real applications of cognitive vision.

    @inproceedings{lirolem4836,
           booktitle = {IEEE International Conference on Robotics and Automation (ICRA 2012)},
               month = {May},
               title = {Cognitive active vision for human identification},
              author = {Yuzuko Utsumi and Eric Sommerlade and Nicola Bellotto and Ian Reid},
                year = {2012},
                note = {We describe an integrated, real-time multi-camera surveillance system that is able to find and track individuals, acquire and archive facial image sequences, and perform face recognition. The system is based around an inference engine that can extract high-level information from an observed scene, and generate appropriate commands for a set of pan-tilt-zoom (PTZ) cameras. The incorporation of a reliable facial recognition into the high-level feedback is a main novelty of our work, showing how high-level understanding of a scene can be used to deploy PTZ sensing resources effectively. The system comprises a distributed camera system using SQL tables as virtual communication channels, Situation Graph
    Trees for knowledge representation, inference and high-level camera control, and a variety of visual processing algorithms including an on-line acquisition of facial images, and on-line recognition of faces by comparing image sets using subspace distance. We provide an extensive evaluation of this method using our system for both acquisition of training data, and later recognition. A set of experiments in a surveillance scenario show the effectiveness of our approach and its potential for real applications of cognitive vision.},
            keywords = {ARRAY(0x7fdc78152f80)},
                 url = {http://eprints.lincoln.ac.uk/4836/},
            abstract = {We describe an integrated, real-time multi-camera surveillance system that is able to find and track individuals, acquire and archive facial image sequences, and perform face recognition. The system is based around an inference engine that can extract high-level information from an observed scene, and generate appropriate commands for a set of pan-tilt-zoom (PTZ) cameras. The incorporation of a reliable facial recognition into the high-level feedback is a main novelty of our work, showing how high-level understanding of a scene can be used to deploy PTZ sensing resources effectively. The system comprises a distributed camera system using SQL tables as virtual communication channels, Situation Graph
    Trees for knowledge representation, inference and high-level camera control, and a variety of visual processing algorithms including an on-line acquisition of facial images, and on-line recognition of faces by comparing image sets using subspace distance. We provide an extensive evaluation of this method using our system for both acquisition of training data, and later recognition. A set of experiments in a surveillance scenario show the effectiveness of our approach and its potential for real applications of cognitive vision.}
    }
  • R. Wood, P. Baxter, and T. Belpaeme, “A Review of long-term memory in natural and synthetic systems,” Adaptive Behavior, vol. 20, iss. 2, pp. 81-103, 2012.
    [BibTeX] [Abstract] [EPrints]

    Memory may be broadly regarded as information gained from past experience which is available in the service of ongoing and future adaptive behavior. The biological implementation of memory shares little with memory in synthetic cognitive systems, where it is typically regarded as a passive storage structure. Neurophysiological evidence indicates that memory is neither passive nor centralised. A review of the relevant literature in the biological and computer sciences is conducted and a novel methodology is applied that incorporates neuroethological approaches with general biological inspiration in the design of synthetic cognitive systems: a case study regarding episodic memory provides an illustration of the utility of this methodology. As a consequence of applying this approach to the reinterpretation of the implementation of memory in synthetic systems, four fundamental functional principles are derived that are in accordance with neuroscientific theory, and which may be applied to the design of more adaptive and robust synthetic cognitive systems: priming, cross-modal associations, cross-modal coordination without semantic information transfer, and global system behavior resulting from activation dynamics within the memory system.

    @article{lirolem23079,
              volume = {20},
              number = {2},
               month = {April},
              author = {Rachel Wood and Paul Baxter and Tony Belpaeme},
               title = {A Review of long-term memory in natural and synthetic systems},
           publisher = {Sage for International Society for Adaptive Behavior (ISAB)},
                year = {2012},
             journal = {Adaptive Behavior},
               pages = {81--103},
            keywords = {ARRAY(0x7fdc78177778)},
                 url = {http://eprints.lincoln.ac.uk/23079/},
            abstract = {Memory may be broadly regarded as information gained from past experi- ence which is available in the service of ongoing and future adaptive behavior. The biological implementation ofmemory shares little with memory in synthetic cognitive systems where it is typically regarded as a passive storage structure. Neurophysiological evidence indicates that memory is neither passive nor cen- tralised. A review of the relevant literature in the biological and computer sciences is conducted and a novel methodology is applied that incorporates neuroethological approaches with general biological inspiration in the design of synthetic cognitive systems: a case study regarding episodic memory provides an illustration of the utility of this methodology. As a consequence of applying this approach to the reinterpretation of the implementation of memory in syn- thetic systems, four fundamental functional principles are derived that are in accordance with neuroscientific theory, and which may be applied to the design of more adaptive and robust synthetic cognitive systems: priming, cross-modal associations, cross-modal coordination without semantic information transfer, and global system behavior resulting from activation dynamics within the mem- ory system.}
    }
  • J. Xu and S. Yue, “A top-down attention model based on the semi-supervised learning,” in 5th International Conference on Biomedical Engineering and Informatics, BMEI 2012, Chongqing, 2012, pp. 1011-1014.
    [BibTeX] [Abstract] [EPrints]

    In this paper, we propose a top-down motion tracking model to detect the attention region. Many biologically inspired systems have been studied, and most of them consist of bottom-up mechanisms and top-down processes. Top-down attention is guided by task-driven information that is acquired through learning procedures. Our model improves the top-down mechanisms by using a probability map (PM). The PM tracks all the potential locations of targets based on the information contained in the frame sequences. In this way, the PM can be regarded as a short-term memory for attended saliency regions. This function is similar to the dorsal stream of the V1 primary area. The semi-supervised learning model constructs an efficient mechanism for attention detection to simulate the eye movements and fixations of the human visual system. Generally, our work is to mimic the human visual system, and it will further be applied on robotics platforms. On randomly selected video clips, our performance is better than other state-of-the-art approaches. © 2012 IEEE.

    @inproceedings{lirolem13408,
              author = {Jiawei Xu and Shigang Yue},
           booktitle = {5th International Conference on Biomedical Engineering and Informatics, BMEI 2012},
             address = {Chongqing},
               title = {A top-down attention model based on the semi-supervised learning},
           publisher = {IEEE},
             journal = {2012 5th International Conference on Biomedical Engineering and Informatics, BMEI 2012},
               pages = {1011--1014},
                year = {2012},
            keywords = {ARRAY(0x7fdc781b9d20)},
                 url = {http://eprints.lincoln.ac.uk/13408/},
            abstract = {In this paper, we proposed a top-down motion tracking model to detect the attention region. Many biological inspired systems have been studied and most of them are consisted by bottom-up mechanisms and top-down processes. Top-down attention is guided by task-driven information that is acquired through learning procedures. Our model improves the top-down mechanisms by using a probability map (PM). The PM follows to track if all the potential locations of targets based on the information contained in the frame sequences. By using this, PM can be regarded as a short term memory for attended saliency regions. This function is similar to the dorsal stream of V1 primary area. The semi-learning model constructs an efficient mechanism for attention detection to simulate the eye movements and fixations in our human visual systems. Generally, our work is to mimic human visual systems and it will further be applied on the robotics platform. From the random selected video clips, our performances are better than other state-of-the-art approaches. {\^A}{\copyright} 2012 IEEE.}
    }
  • J. Xu and S. Yue, “Visual based contour detection by using the improved short path finding,” Communications in Computer and Information Science, vol. 311, pp. 145-151, 2012.
    [BibTeX] [Abstract] [EPrints]

    Contour detection is an important characteristic of human visual perception. Humans can easily find an object's contour in a complex visual scene; however, traditional computer vision cannot do this well. This paper is primarily concerned with how to track an object's contour using human-like vision. In this article, we propose a biologically motivated computational model to track and detect object contours. Although previous research has proposed models using the Dijkstra algorithm [1], our work aims to mimic human eye movements and imitate saccades. We use natural images with associated ground truth contour maps to assess the performance of the proposed operator regarding the detection of contours while suppressing texture edges. The results show that our method enhances contour detection in cluttered visual scenes more effectively than the classical edge detectors proposed by other methods. © Springer-Verlag Berlin Heidelberg 2012.

    @article{lirolem11608,
              volume = {311},
               month = {September},
              author = {Jiawei Xu and Shigang Yue},
                note = {13th International Conference, EANN 2012, London, UK, September 20-23, 2012. Conference Code:98083},
             address = {Chengdu},
               title = {Visual based contour detection by using the improved short path finding},
           publisher = {Springer Verlag},
                year = {2012},
             journal = {Communications in Computer and Information Science},
               pages = {145--151},
            keywords = {ARRAY(0x7fdc7805f580)},
                 url = {http://eprints.lincoln.ac.uk/11608/},
            abstract = {Contour detection is an important characteristic of human vision perception. Humans can easily find the objects contour in a complex visual scene; however, traditional computer vision cannot do well. This paper primarily concerned with how to track the objects contour using a human-like vision. In this article, we propose a biologically motivated computational model to track and detect the objects contour. Even the previous research has proposed some models by using the Dijkstra algorithm 1, our work is to mimic the human eye movement and imitate saccades in our humans. We use natural images with associated ground truth contour maps to assess the performance of the proposed operator regarding the detection of contours while suppressing texture edges. The results show that our method enhances contour detection in cluttered visual scenes more effectively than classical edge detectors proposed by other methods. {\^A}{\copyright} Springer-Verlag Berlin Heidelberg 2012.}
    }
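
    The entry above builds on shortest-path (Dijkstra-style) contour tracking; as a generic illustration of that ingredient only, the sketch below runs Dijkstra over a 2D cost map (low cost on strong edges) between two seed pixels. The cost-map convention and 8-connectivity are assumptions for illustration, not the authors' formulation.

        import heapq
        import numpy as np

        def min_cost_path(cost, start, goal):
            """Dijkstra over a 2D cost map with 8-connected pixels; low cost should be
            placed on strong edges so the cheapest path follows the contour."""
            h, w = cost.shape
            dist = np.full((h, w), np.inf)
            dist[start] = cost[start]
            prev = {}
            heap = [(dist[start], start)]
            while heap:
                d, (r, c) = heapq.heappop(heap)
                if (r, c) == goal:
                    break
                if d > dist[r, c]:
                    continue                        # stale heap entry
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                        dist[nr, nc] = d + cost[nr, nc]
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (dist[nr, nc], (nr, nc)))
            path, node = [], goal
            while node != start:
                path.append(node)
                node = prev[node]
            return [start] + path[::-1]

        edges = np.ones((32, 32)); edges[16, :] = 0.05      # a cheap horizontal 'edge' to follow
        print(min_cost_path(edges, start=(16, 0), goal=(16, 31))[:3])
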
  • S. Yue and C. Rind, “Visually stimulated motor control for a robot with a pair of LGMD visual neural networks,” International Journal of Advanced Mechatronic Systems, vol. 4, iss. 5/6, pp. 237-247, 2012.
    [BibTeX] [Abstract] [EPrints]

    In this paper, we propose a visually stimulated motor control (VSMC) system for autonomous navigation of mobile robots. Inspired by a locust's motion-sensitive interneuron, the lobula giant movement detector (LGMD), the presented VSMC system enables a robot to explore local paths or interact with dynamic objects effectively using visual input only. The VSMC consists of a pair of LGMD visual neural networks and a simple motor command generator. Each LGMD processes images covering part of the wide field of view and extracts relevant visual cues. The outputs from the two LGMDs are compared and interpreted into executable motor commands directly. These motor commands are then executed by the robot's wheel control system in real time to generate the corresponding motion adjustments. Our experiments showed that this bio-inspired VSMC system worked well in different scenarios.

    @article{lirolem9309,
              volume = {4},
              number = {5/6},
               month = {October},
              author = {Shigang Yue and Claire Rind},
                note = {Special Issue on Advanced Application of Modelling, Identification and Control},
               title = {Visually stimulated motor control for a robot with a pair of LGMD visual neural networks},
           publisher = {Inderscience},
                year = {2012},
             journal = {International Journal of Advanced Mechatronic Systems},
               pages = {237--247},
            keywords = {ARRAY(0x7fdc7805f520)},
                 url = {http://eprints.lincoln.ac.uk/9309/},
            abstract = {In this paper, we proposed a visually stimulated motor control (VSMC) system
    for autonomous navigation of mobile robots. Inspired from a locusts? motion sensitive
    interneuron ? lobula giant movement detector (LGMD), the presented VSMC system enables a
    robot exploring local paths or interacting with dynamic objects effectively using visual input
    only. The VSMC consists of a pair of LGMD visual neural networks and a simple motor
    command generator. Each LGMD processes images covering part of the wide field of view and
    extracts relevant visual cues. The outputs from the two LGMDs are compared and interpreted
    into executable motor commands directly. These motor commands are then executed by the
    robot?s wheel control system in real-time to generate corresponded motion adjustment
    accordingly. Our experiments showed that this bio-inspired VSMC system worked well in
    different scenarios.}
    }
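
    To give a rough flavour of how a pair of LGMD-like outputs can be turned into steering, the sketch below computes a heavily simplified excitation/inhibition response per half-image and maps the left/right difference to a turn command; the filtering, constants and squashing function are illustrative assumptions and not the published LGMD model.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lgmd_output(prev_frame, curr_frame, prev_excitation, w_i=0.7):
            """Heavily simplified LGMD-like response for one new frame (intensities in
            [0, 1]): excitation is the absolute luminance change, inhibition is the
            spatially blurred excitation of the previous step, and the output squashes
            the mean rectified difference into (0, 1)."""
            excitation = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
            inhibition = uniform_filter(prev_excitation, size=3)
            s = np.maximum(excitation - w_i * inhibition, 0.0)
            spike = 1.0 - np.exp(-s.mean() / 0.1)      # squashing scale is illustrative
            return spike, excitation                    # excitation becomes next step's state

        def steering_command(left_out, right_out, gain=2.0, threshold=0.5):
            """Turn away from whichever half of the view is more excited."""
            if max(left_out, right_out) < threshold:
                return 0.0                              # nothing imminent: go straight
            return gain * (left_out - right_out)        # positive -> steer right, away from the left hazard

        # Toy usage: an object suddenly appears in the left half of the view.
        prev = np.zeros((40, 60)); curr = prev.copy(); curr[10:30, 5:25] = 1.0
        left_spike, _ = lgmd_output(prev, curr, np.zeros_like(prev))
        right_spike, _ = lgmd_output(prev, prev, np.zeros_like(prev))
        print(steering_command(left_spike, right_spike))
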

2011

  • F. Arvin, S. Doraisamy, K. Samsudin, F. A. Ahmad, and A. R. Ramli, “Implementation of a cue-based aggregation with a swarm robotic system,” in Third Knowledge Technology Week, KTW 2011, 2011, pp. 113-122.
    [BibTeX] [Abstract] [EPrints]

    This paper presents an aggregation behavior using a robot swarm. Swarm robotics takes inspiration from the behaviors of social insects. BEECLUST is an aggregation control that is inspired by the thermotactic behavior of young honeybees in producing clusters. In this study, the aggregation method is implemented with a modification of the original BEECLUST. Both aggregations are performed using real and simulated robots. We aim to demonstrate that a simple change in the control of individual robots results in significant changes in the collective behavior of the swarm. In addition, the behavior of the swarm is modeled macroscopically based on probabilistic control. The presented model could depict the behavior of the swarm throughout the performed scenarios with real and simulated robots.

    @inproceedings{lirolem6086,
              volume = {295},
              number = {2},
               month = {July},
              author = {Farshad Arvin and Shyamala Doraisamy and Khairulmizam Samsudin and Faisul Arif Ahmad and Abdul Rahman  Ramli},
                note = {Third Knowledge Technology Week, KTW 2011, Kajang, Malaysia, July 18-22, 2011. Revised Selected Papers},
           booktitle = {Third Knowledge Technology Week, KTW 2011},
               title = {Implementation of a cue-based aggregation with a swarm robotic system},
           publisher = {Springer},
                year = {2011},
               pages = {113--122},
            keywords = {ARRAY(0x7fdc780822d8)},
                 url = {http://eprints.lincoln.ac.uk/6086/},
            abstract = {This paper presents an aggregation behavior using a robot swarm. Swarm robotics takes inspiration from behaviors of social insects. BEECLUST is an aggregation control that is inspired from thermotactic behavior of young honeybees in producing clusters. In this study, aggregation method is implemented with a modification on original BEECLUST. Both aggregations are performed using real and simulated robots. We aim to demonstrate that, a simple change in control of individual robots results in significant changes in collective behavior of the swarm. In addition, the behavior of the swarm is modeled by a macroscopic modeling based on a probability control. The presented model in this study could depict the behavior of swarm throughout the performed scenarios with real and simulated robots.}
    }
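
    As an informal sketch of the BEECLUST-style rule discussed above (stop for longer where the local cue is stronger), the code below maps a sensor reading to a waiting time and applies one simplified per-robot decision step; the saturating mapping and all constants are assumptions for illustration.

        import random

        def beeclust_waiting_time(cue, w_max=60.0, theta=0.5):
            """Map a normalised local cue reading (0..1) to a waiting time: the stronger
            the cue, the longer a stopped robot waits, so clusters grow where the cue is
            strongest. The saturating form and constants are illustrative."""
            return w_max * cue ** 2 / (cue ** 2 + theta ** 2)

        def beeclust_step(robot, robot_ahead, wall_ahead, cue):
            """One simplified BEECLUST-style decision for a single robot."""
            if robot['waiting'] > 0:
                robot['waiting'] -= 1                          # stay stopped inside a cluster
            elif robot_ahead:
                robot['waiting'] = int(beeclust_waiting_time(cue))
            elif wall_ahead:
                robot['heading'] = random.uniform(0.0, 360.0)  # turn away from the wall
            # otherwise: keep driving straight at constant speed
            return robot

        r = {'heading': 0.0, 'waiting': 0}
        r = beeclust_step(r, robot_ahead=True, wall_ahead=False, cue=0.8)
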
  • F. Arvin, K. Samsudin, A. R. Ramli, and M. Bekravi, “Imitation of honeybee aggregation with collective behavior of swarm robots,” International Journal of Computational Intelligence Systems, vol. 4, iss. 5, pp. 739-748, 2011.
    [BibTeX] [Abstract] [EPrints]

    This paper analyzes the collective behaviors of swarm robots that play a role in the aggregation scenario. Honeybee aggregation is inspired by the behavior of young honeybees, which tend to aggregate around an optimal zone. This aggregation is implemented based on variations of parameter values. In the second phase, two modifications of the original honeybee aggregation, namely dynamic velocity and comparative waiting time, are proposed. Results of the performed experiments showed significant differences in the collective behavior of the swarm system for the different algorithms.

    @article{lirolem5515,
              volume = {4},
              number = {5},
               month = {August},
              author = {Farshad Arvin and Khairulmizam Samsudin and Abdul Rahman Ramli  and Masoud Bekravi},
               title = {Imitation of honeybee aggregation with collective behavior of swarm robots},
           publisher = {Taylor \& Francis},
                year = {2011},
             journal = {International Journal of Computational Intelligence Systems},
               pages = {739 --748},
            keywords = {ARRAY(0x7fdc78086670)},
                 url = {http://eprints.lincoln.ac.uk/5515/},
            abstract = {This paper analyzes the collective behaviors of swarm robots that play role in the aggregation scenario. Honeybee
    aggregation is an inspired behavior of young honeybees which tend to aggregate around an optimal zone. This
    aggregation is implemented based on variation of parameters values. In the second phase, two modifications on original honeybee aggregation namely dynamic velocity and comparative waiting time are proposed. Results of the performed experiments showed the significant differences in collective behavior of the swarm system for different algorithms.}
    }
  • F. Arvin, S. Doraisamy, and E. S. Khorasani, “Frequency shifting approach towards textual transcription of heartbeat sounds,” Biological Procedures Online, vol. 13, iss. 7, pp. 1-7, 2011.
    [BibTeX] [Abstract] [EPrints]

    Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows a continuous heart sound stream to be recorded in a text format which can be stored in very little memory in comparison with other audio formats. In addition, text-based data allow indexing and searching techniques to be applied to access the critical events. Hence, the transcribed heartbeat sounds provide useful information for monitoring the behavior of a patient over long durations of time. This paper proposes a frequency shifting method in order to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples which were recorded from different heart disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.

    @article{lirolem5794,
              volume = {13},
              number = {7},
               month = {November},
              author = {Farshad Arvin and Shyamala Doraisamy and Ehsan Safar Khorasani},
               title = {Frequency shifting approach towards textual transcription of heartbeat sounds},
           publisher = {Springer},
                year = {2011},
             journal = {Biological Procedures Online},
               pages = {1--7},
            keywords = {ARRAY(0x7fdc78086490)},
                 url = {http://eprints.lincoln.ac.uk/5794/},
            abstract = {Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of its audio features can assist physicians towards diagnosing diseases. Textual transcription allows recording a continuous heart sound stream using a text format which can be stored in very small memory in comparison with other audio formats. In addition, a text-based data allows applying indexing and searching techniques to access to the critical events. Hence, the transcribed heartbeat sounds provides useful information to monitor the behavior of a patient for the long duration of time. This paper proposes a frequency shifting method in order to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples which were recorded from different heart diseases categories. The observed results show that, the proposed shifting method significantly improves the performance of the transcription.}
    }
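
    A generic way to shift the spectral content of a recorded sound, in the spirit of the frequency shifting mentioned above, is single-sideband modulation via the analytic signal; the sketch below shows that standard technique, which is only an assumed stand-in for the paper's actual method.

        import numpy as np
        from scipy.signal import hilbert

        def frequency_shift(x, shift_hz, fs):
            """Shift every spectral component of a real signal up by shift_hz using the
            analytic (Hilbert) signal, i.e. single-sideband modulation."""
            t = np.arange(len(x)) / fs
            analytic = hilbert(x)                              # suppresses negative frequencies
            return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))

        # Move a low 'heart-sound-like' 40 Hz tone up into a more audible register.
        fs = 4000
        t = np.arange(0, 1.0, 1.0 / fs)
        heart_like = np.sin(2 * np.pi * 40 * t)
        shifted = frequency_shift(heart_like, shift_hz=200.0, fs=fs)   # now centred near 240 Hz
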
  • R. S. Aylett, G. Castellano, B. Raducanu, A. Paiva, and M. Hanheide, “Long-term socially perceptive and interactive robot companions: challenges and future perspective,” in Conference of 2011 ACM International Conference on Multimodal Interaction, ICMI’11, Alicante, 2011, pp. 323-326.
    [BibTeX] [Abstract] [EPrints]

    This paper gives a brief overview of the challenges for multi-modal perception and generation applied to robot companions located in human social environments. It reviews the current position in both perception and generation and the immediate technical challenges, and goes on to consider the extra issues raised by embodiment and social context. Finally, it briefly discusses the impact of systems that must function continually over months rather than just for a few hours. © 2011 ACM.

    @inproceedings{lirolem8314,
               month = {November},
              author = {Ruth S. Aylett and Ginevra Castellano and Bogdan Raducanu and Ana Paiva and Marc Hanheide},
                note = {Conference Code: 87685},
           booktitle = {Conference of 2011 ACM International Conference on Multimodal Interaction, ICMI'11},
               title = {Long-term socially perceptive and interactive robot companions: challenges and future perspective},
             address = {Alicante},
           publisher = {ACM},
                year = {2011},
             journal = {ICMI'11 - Proceedings of the 2011 ACM International Conference on Multimodal Interaction},
               pages = {323--326},
            keywords = {ARRAY(0x7fdc78082260)},
                 url = {http://eprints.lincoln.ac.uk/8314/},
            abstract = {This paper gives a brief overview of the challenges for multi-model perception and generation applied to robot companions located in human social environments. It reviews the current position in both perception and generation and the immediate technical challenges and goes on to consider the extra issues raised by embodiment and social context. Finally, it briefly discusses the impact of systems that must function continually over months rather than just for a few hours. {\^A}{\copyright} 2011 ACM.}
    }
  • V. Belevskiy and S. Yue, “Near range pedestrian collision detection using bio-inspired visual neural networks,” in 2011 Seventh International Conference on Natural Computation, 2011, pp. 786-790.
    [BibTeX] [Abstract] [EPrints]

    New vehicular safety standards require the development of pedestrian collision detection systems that can trigger the deployment of active impact alleviation measures from the vehicle prior to a collision. In this paper, we propose a new vision-based system for near-range pedestrian collision detection. The low-level system uses a bio-inspired visual neural network, which emulates the visual system of the locust, to detect visual cues relevant to objects in front of a moving car. At a higher level, the system employs a neural-network classifier to identify dangerous pedestrian positions, triggering an alarm signal. The system was tuned via simulation and tested using recorded video sequences of real vehicle impacts. The experiment results demonstrate that the system is able to discriminate between pedestrians in dangerous and safe positions, triggering alarms accordingly.

    @inproceedings{lirolem12818,
           booktitle = {2011 Seventh International Conference on Natural Computation},
               month = {July},
               title = {Near range pedestrian collision detection using bio-inspired visual neural networks},
              author = {Vladimir Belevskiy and Shigang Yue},
           publisher = {IEEE},
                year = {2011},
               pages = {786--790},
            keywords = {ARRAY(0x7fdc78186010)},
                 url = {http://eprints.lincoln.ac.uk/12818/},
            abstract = {New vehicular safety standards require the development of pedestrian collision detection systems that can trigger the deployment of active impact alleviation measures from the vehicle prior to a collision. In this paper, we propose a new vision-based system for near-range pedestrian collision detection. The low-level system uses a bio-inspired visual neural network, which emulates the visual system of the locust, to detect visual cues relevant to objects in front of a moving car. At a higher level, the system employs a neural-network classifier to identify dangerous pedestrian positions, triggering an alarm signal. The system was tuned via simulation and tested using recorded video sequences of real vehicle impacts. The experiment results demonstrate that the system is able to discriminate between pedestrians in dangerous and safe positions, triggering alarms accordingly.}
    }
  • H. Cuayahuitl, “Spatially-aware dialogue control using hierarchical reinforcement learning,” ACM Transactions on Speech and Language Processing (TSLP), vol. 7, iss. 3, 2011.
    [BibTeX] [Abstract] [EPrints]

    This article addresses the problem of scalable optimization for spatially-aware dialogue systems. These kinds of systems must perceive, reason, and act about the spatial environment where they are embedded. We formulate the problem in terms of Semi-Markov Decision Processes and propose a hierarchical reinforcement learning approach to optimize subbehaviors rather than full behaviors. Because of the vast number of policies that are required to control the interaction in a dynamic environment (e.g., a dialogue system assisting a user to navigate in a building from one location to another), our learning approach is based on two stages: (a) the first stage learns low-level behavior, in advance; and (b) the second stage learns high-level behavior, in real time. For such a purpose we extend an existing algorithm in the literature of reinforcement learning in order to support reusable policies and therefore to perform fast learning. We argue that our learning approach makes the problem feasible, and we report on a novel reinforcement learning dialogue system that performs a joint optimization between dialogue and spatial behaviors. Our experiments, using simulated and real environments, are based on a text-based dialogue system for indoor navigation. Experimental results in a realistic environment reported an overall user satisfaction result of 89%, which suggests that our proposed approach is attractive for its application in real interactions as it combines fast learning with adaptive and reasonable behavior.

    @article{lirolem22209,
              volume = {7},
              number = {3},
               month = {May},
              author = {Heriberto Cuayahuitl},
               title = {Spatially-aware dialogue control using hierarchical reinforcement learning},
           publisher = {Association for Computing Machinery},
             journal = {ACM Transactions on Speech and Language Processing (TSLP)},
                year = {2011},
            keywords = {ARRAY(0x7fdc78030ad8)},
                 url = {http://eprints.lincoln.ac.uk/22209/},
            abstract = {This article addresses the problem of scalable optimization for spatially-aware dialogue systems. These kinds of systems must perceive, reason, and act about the spatial environment where they are embedded. We formulate the problem in terms of Semi-Markov Decision Processes and propose a hierarchical reinforcement learning approach to optimize subbehaviors rather than full behaviors. Because of the vast number of policies that are required to control the interaction in a dynamic environment (e.g., a dialogue system assisting a user to navigate in a building from one location to another), our learning approach is based on two stages: (a) the first stage learns low-level behavior, in advance; and (b) the second stage learns high-level behavior, in real time. For such a purpose we extend an existing algorithm in the literature of reinforcement learning in order to support reusable policies and therefore to perform fast learning. We argue that our learning approach makes the problem feasible, and we report on a novel reinforcement learning dialogue system that performs a joint optimization between dialogue and spatial behaviors. Our experiments, using simulated and real environments, are based on a text-based dialogue system for indoor navigation. Experimental results in a realistic environment reported an overall user satisfaction result of 89\%, which suggests that our proposed approach is attractive for its application in real interactions as it combines fast learning with adaptive and reasonable behavior.}
    }
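
    The entry above optimises sub-behaviours with (semi-Markov) reinforcement learning; as a minimal, generic illustration of the underlying update (not the paper's hierarchical algorithm), the sketch below implements tabular Q-learning for a single subtask with an optional duration-discounted SMDP-style target. The example states and actions are hypothetical.

        import random
        from collections import defaultdict

        class SubtaskQLearner:
            """Tabular Q-learning for a single sub-behaviour. A hierarchical controller
            would hold one learner per subtask and invoke them as temporally extended
            actions; only the flat, duration-discounted update is shown here."""

            def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
                self.q = defaultdict(float)
                self.actions, self.alpha, self.gamma, self.epsilon = actions, alpha, gamma, epsilon

            def act(self, state):
                if random.random() < self.epsilon:
                    return random.choice(self.actions)          # explore
                return max(self.actions, key=lambda a: self.q[(state, a)])

            def update(self, state, action, reward, next_state, duration=1):
                best_next = max(self.q[(next_state, a)] for a in self.actions)
                target = reward + self.gamma ** duration * best_next
                self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

        learner = SubtaskQLearner(actions=['ask_confirm', 'give_direction', 'reprompt'])
        a = learner.act('corridor_junction')
        learner.update('corridor_junction', a, reward=-1.0, next_state='lab_door', duration=3)
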
  • F. Dayoub, G. Cielniak, and T. Duckett, “Long-term experiments with an adaptive spherical view representation for navigation in changing environments,” Robotics and Autonomous Systems, vol. 59, iss. 5, pp. 285-295, 2011.
    [BibTeX] [Abstract] [EPrints]

    Real-world environments such as houses and offices change over time, meaning that a mobile robot's map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as representing the local 3D geometry of the environment. A series of experiments demonstrate the persistence performance of the proposed system in real changing environments, including analysis of the long-term stability.

    @article{lirolem6046,
              volume = {59},
              number = {5},
               month = {May},
              author = {Feras Dayoub and Grzegorz Cielniak and Tom Duckett},
               title = {Long-term experiments with an adaptive spherical view representation for navigation in changing environments},
           publisher = {Elsevier},
                year = {2011},
             journal = {Robotics and Autonomous Systems},
               pages = {285--295},
            keywords = {ARRAY(0x7fdc7804f010)},
                 url = {http://eprints.lincoln.ac.uk/6046/},
            abstract = {Real-world environments such as houses and offices change over time, meaning that a mobile robot?s map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as representing the local 3D geometry of the environment. A series of experiments demonstrate the persistence performance of the proposed system in real changing environments, including analysis of the long-term stability.}
    }
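
    The updating mechanism described above is inspired by a multi-store model of memory; the toy sketch below illustrates one way a short-term/long-term promotion-and-forgetting rule for reference-view features could look. The class name, thresholds and scoring are entirely hypothetical and not the authors' method.

        class AdaptiveViewMemory:
            """Toy short-term / long-term update for the reference features of one map
            node: features matched repeatedly in the short-term store are promoted to
            long-term memory; unmatched long-term features decay and are forgotten."""

            def __init__(self, promote_after=3, forget_below=-5):
                self.short_term = {}   # feature_id -> consecutive match count
                self.long_term = {}    # feature_id -> rehearsal score
                self.promote_after, self.forget_below = promote_after, forget_below

            def update(self, matched_ids, observed_ids):
                for fid in observed_ids:
                    self.short_term[fid] = self.short_term.get(fid, 0) + (1 if fid in matched_ids else 0)
                    if self.short_term[fid] >= self.promote_after:
                        self.long_term.setdefault(fid, 0)
                for fid in list(self.long_term):
                    self.long_term[fid] += 1 if fid in matched_ids else -1
                    if self.long_term[fid] < self.forget_below:
                        del self.long_term[fid]     # the environment changed here: drop the feature

        mem = AdaptiveViewMemory()
        for _ in range(4):
            mem.update(matched_ids={1, 2}, observed_ids={1, 2, 7})
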
  • F. Dayoub, G. Cielniak, and T. Duckett, “Long-term experiment using an adaptive appearance-based map for visual navigation by mobile robots,” in Towards autonomous robotic systems, Sheffield: Springer, 2011, vol. 6856 , pp. 400-401.
    [BibTeX] [Abstract] [EPrints]

    Building functional and useful mobile service robots means that these robots have to be able to share physical spaces with humans, and to update their internal representation of the world in response to changes in the arrangement of objects and appearance of the environment - changes that may be spontaneous and unpredictable - as a result of human activities. However, almost all past research on robot mapping addresses only the initial learning of an environment, a phase which will only be a short moment in the lifetime of a service robot that may be expected to operate for many years. © 2011 Springer-Verlag Berlin Heidelberg.

    @incollection{lirolem10338,
              volume = {6856 },
               month = {August},
              author = {Feras Dayoub and Grzegorz Cielniak and Tom Duckett},
              series = {Lecture Notes in Computer Science},
                 note = {12th Annual Conference, TAROS 2011, Sheffield, UK, August 31 -- September 2, 2011. Proceedings},
           booktitle = {Towards autonomous robotic systems},
               title = {Long-term experiment using an adaptive appearance-based map for visual navigation by mobile robots},
             address = {Sheffield},
           publisher = {Springer},
                year = {2011},
               pages = {400--401},
            keywords = {ARRAY(0x7fdc78081cf0)},
                 url = {http://eprints.lincoln.ac.uk/10338/},
            abstract = {Building functional and useful mobile service robots means that these robots have to be able to share physical spaces with humans, and to update their internal representation of the world in response to changes in the arrangement of objects and appearance of the environment - changes that may be spontaneous and unpredictable - as a result of human activities. However, almost all past research on robot mapping addresses only the initial learning of an environment, a phase which will only be a short moment in the lifetime of a service robot that may be expected to operate for many years. {\copyright} 2011 Springer-Verlag Berlin Heidelberg.}
    }
  • R. Golombek, S. Wrede, M. Hanheide, and M. Heckmann, “Online data-driven fault detection for robotic systems,” in Conference of 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems: Celebrating 50 Years of Robotics, IROS’11, San Francisco, CA, 2011, pp. 3011-3016.
    [BibTeX] [Abstract] [EPrints]

    In this paper we demonstrate the online applicability of the fault detection and diagnosis approach which we previously developed and published in [1]. In our former work we showed that a purely data driven fault detection approach can be successfully built based on monitored inter-component communication data of a robotic system and used for a-posteriori fault detection. Here we propose an extension to this approach which is capable of online learning of the fault model as well as online fault detection. We evaluate the application of our approach in the context of a RoboCup task executed by our service robot BIRON in cooperation with an expert user. \copyright 2011 IEEE.

    @inproceedings{lirolem8313,
               month = {September},
              author = {R. Golombek and S. Wrede and Marc Hanheide and M. Heckmann},
                note = {Conference Code: 87712},
           booktitle = {Conference of 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems: Celebrating 50 Years of Robotics, IROS'11},
               title = {Online data-driven fault detection for robotic systems},
             address = {San Francisco, CA},
           publisher = {IEEE},
                year = {2011},
             journal = {IEEE International Conference on Intelligent Robots and Systems},
               pages = {3011--3016},
            keywords = {ARRAY(0x7fdc78086358)},
                 url = {http://eprints.lincoln.ac.uk/8313/},
            abstract = {In this paper we demonstrate the online applicability of the fault detection and diagnosis approach which we previously developed and published in [1]. In our former work we showed that a purely data driven fault detection approach can be successfully built based on monitored inter-component communication data of a robotic system and used for a-posteriori fault detection. Here we propose an extension to this approach which is capable of online learning of the fault model as well as online fault detection. We evaluate the application of our approach in the context of a RoboCup task executed by our service robot BIRON in cooperation with an expert user. {\copyright} 2011 IEEE.}
    }
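    As a generic illustration of purely data-driven, online fault detection over inter-component communication, the sketch below keeps running statistics of message inter-arrival times per channel (Welford's algorithm) and flags observations that deviate strongly from what has been learned so far. It is a stand-in for the idea only, not the model developed in the paper.

    import math

    class OnlineChannelModel:
        """Learns a channel's typical inter-arrival time online and flags outliers."""
        def __init__(self, z_threshold=4.0, warmup=10):
            self.n, self.mean, self.m2 = 0, 0.0, 0.0
            self.z_threshold, self.warmup = z_threshold, warmup

        def observe(self, dt):
            # Welford update of running mean and variance
            self.n += 1
            delta = dt - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (dt - self.mean)
            if self.n < self.warmup:
                return False                         # still learning, raise no alarms yet
            std = math.sqrt(self.m2 / (self.n - 1))
            return std > 0 and abs(dt - self.mean) / std > self.z_threshold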
  • M. Hanheide, C. Gretton, R. W. Dearden, N. A. Hawes, J. L. Wyatt, M. Goedelbecker, A. Pronobis, A. Aydemir, and H. Zender, “Exploiting probabilistic knowledge under uncertain sensing for efficient robot behaviour,” in Twenty-Second International Joint Conference on Artificial Intelligence, 2011, pp. 2442-2449.
    [BibTeX] [Abstract] [EPrints]

    Robots must perform tasks efficiently and reliably while acting under uncertainty. One way to achieve efficiency is to give the robot common-sense knowledge about the structure of the world. Reliable robot behaviour can be achieved by modelling the uncertainty in the world probabilistically. We present a robot system that combines these two approaches and demonstrate the improvements in efficiency and reliability that result. Our first contribution is a probabilistic relational model integrating common-sense knowledge about the world in general, with observations of a particular environment. Our second contribution is a continual planning system which is able to plan in the large problems posed by that model, by automatically switching between decision-theoretic and classical procedures. We evaluate our system on object search tasks in two different real-world indoor environments. By reasoning about the trade-offs between possible courses of action with different informational effects, and exploiting the cues and general structures of those environments, our robot is able to consistently demonstrate efficient and reliable goal-directed behaviour.

    @inproceedings{lirolem6756,
               month = {July},
              author = {Marc Hanheide and Charles Gretton and Richard W. Dearden and Nick A. Hawes and Jeremy L. Wyatt and Moritz Goedelbecker and Andrzej Pronobis and Alper Aydemir and Hendrik Zender},
                 note = {Robots must perform tasks efficiently and reliably while acting under uncertainty. One way to achieve efficiency is to give the robot common-sense knowledge about the structure of the world. Reliable robot behaviour can be achieved by modelling the uncertainty in the world probabilistically. We present a robot system that combines these two approaches and demonstrate the improvements in efficiency and reliability that result. Our first contribution is a probabilistic relational model integrating common-sense knowledge about the world in general, with observations of a particular environment. Our second contribution is a continual planning system which is able to plan in the large problems posed by that model, by automatically switching between decision-theoretic and classical procedures. We evaluate our system on object search tasks in two different real-world indoor environments. By reasoning about the trade-offs between possible courses of action with different informational effects, and exploiting the cues and general structures of those environments, our robot is able to consistently demonstrate efficient and reliable goal-directed behaviour.},
           booktitle = {Twenty-Second International Joint Conference on Artificial Intelligence},
              editor = {B. Gottfried and H. Aghajan},
               title = {Exploiting probabilistic knowledge under uncertain sensing for efficient robot behaviour},
           publisher = {International Joint Conferences on Artificial Intelligence},
                year = {2011},
               pages = {2442--2449},
            keywords = {ARRAY(0x7fdc780524e8)},
                 url = {http://eprints.lincoln.ac.uk/6756/},
            abstract = {Robots must perform tasks efficiently and reliably while acting under uncertainty. One way to achieve efficiency is to give the robot common-sense knowledge about the structure of the world. Reliable robot behaviour can be achieved by modelling the uncertainty in the world probabilistically. We present a robot system that combines these two approaches and demonstrate the improvements in efficiency and reliability that result. Our first contribution is a probabilistic relational model integrating common-sense knowledge about the world in general, with observations of a particular environment. Our second contribution is a continual planning system which is able to plan in the large problems posed by that model, by automatically switching between decision-theoretic and classical procedures. We evaluate our system on object search tasks in two different real-world indoor environments. By reasoning about the trade-offs between possible courses of action with different informational effects, and exploiting the cues and general structures of those environments, our robot is able to consistently demonstrate efficient and reliable goal-directed behaviour.}
    }
  • N. Hawes, M. Hanheide, J. Hargreaves, B. Page, H. Zender, and P. Jensfelt, “Home alone: autonomous extension and correction of spatial representations,” in Robotics and Automation (ICRA), 2011 IEEE International Conference on, Shanghai, 2011, pp. 3907-3914.
    [BibTeX] [Abstract] [EPrints]

    In this paper we present an account of the problems faced by a mobile robot given an incomplete tour of an unknown environment, and introduce a collection of techniques which can generate successful behaviour even in the presence of such problems. Underlying our approach is the principle that an autonomous system must be motivated to act to gather new knowledge, and to validate and correct existing knowledge. This principle is embodied in Dora, a mobile robot which features the aforementioned techniques: shared representations, non-monotonic reasoning, and goal generation and management. To demonstrate how well this collection of techniques work in real-world situations we present a comprehensive analysis of the Dora system's performance over multiple tours in an indoor environment. In this analysis Dora successfully completed 18 of 21 attempted runs, with all but 3 of these successes requiring one or more of the integrated techniques to recover from problems. \copyright 2011 IEEE.

    @inproceedings{lirolem8353,
               month = {May},
              author = {N. Hawes and Marc Hanheide and J. Hargreaves and B. Page and H. Zender and P. Jensfelt},
                note = {Conference of 2011 IEEE International Conference on Robotics and Automation, ICRA 2011; Conference Date: 9 May 2011 through 13 May 2011; Conference Code: 94261},
           booktitle = {Robotics and Automation (ICRA), 2011 IEEE International Conference on},
               title = {Home alone: autonomous extension and correction of spatial representations},
             address = {Shanghai},
           publisher = {IEEE},
                year = {2011},
             journal = {Proceedings - IEEE International Conference on Robotics and Automation},
               pages = {3907--3914},
            keywords = {ARRAY(0x7fdc781b4740)},
                 url = {http://eprints.lincoln.ac.uk/8353/},
            abstract = {In this paper we present an account of the problems faced by a mobile robot given an incomplete tour of an unknown environment, and introduce a collection of techniques which can generate successful behaviour even in the presence of such problems. Underlying our approach is the principle that an autonomous system must be motivated to act to gather new knowledge, and to validate and correct existing knowledge. This principle is embodied in Dora, a mobile robot which features the aforementioned techniques: shared representations, non-monotonic reasoning, and goal generation and management. To demonstrate how well this collection of techniques work in real-world situations we present a comprehensive analysis of the Dora system's performance over multiple tours in an indoor environment. In this analysis Dora successfully completed 18 of 21 attempted runs, with all but 3 of these successes requiring one or more of the integrated techniques to recover from problems. {\copyright} 2011 IEEE.}
    }
  • N. Hawes, M. Hanheide, J. Hargreaves, B. Page, H. Zender, and P. Jensfelt, “Home alone: autonomous extension and correction of spatial representations,” in 2011 IEEE International Conference on Robotics and Automation (ICRA), 2011, pp. 3907-3914.
    [BibTeX] [Abstract] [EPrints]

    In this paper we present an account of the problems faced by a mobile robot given an incomplete tour of an unknown environment, and introduce a collection of techniques which can generate successful behaviour even in the presence of such problems. Underlying our approach is the principle that an autonomous system must be motivated to act to gather new knowledge, and to validate and correct existing knowledge. This principle is embodied in Dora, a mobile robot which features the aforementioned techniques: shared representations, non-monotonic reasoning, and goal generation and management. To demonstrate how well this collection of techniques work in real-world situations we present a comprehensive analysis of the Dora system's performance over multiple tours in an indoor environment. In this analysis Dora successfully completed 18 of 21 attempted runs, with all but 3 of these successes requiring one or more of the integrated techniques to recover from problems.

    @inproceedings{lirolem6764,
               month = {May},
              author = {Nick Hawes and Marc Hanheide and Jack Hargreaves and Ben Page and Hendrik Zender and Patric Jensfelt},
                note = {In this paper we present an account
    of the problems faced by a mobile robot given
    an incomplete tour of an unknown environment,
    and introduce a collection of techniques which can
    generate successful behaviour even in the presence
    of such problems. Underlying our approach is the
    principle that an autonomous system must be motivated
    to act to gather new knowledge, and to validate
    and correct existing knowledge. This principle is
    embodied in Dora, a mobile robot which features
    the aforementioned techniques: shared representations,
    non-monotonic reasoning, and goal generation
    and management. To demonstrate how well this
    collection of techniques work in real-world situations
    we present a comprehensive analysis of the Dora
     system's performance over multiple tours in an indoor
    environment. In this analysis Dora successfully
    completed 18 of 21 attempted runs, with all but
    3 of these successes requiring one or more of the
    integrated techniques to recover from problems.},
           booktitle = {2011 IEEE International Conference on Robotics and Automation (ICRA)},
              editor = {B. Gottfried and H. Aghajan},
               title = {Home alone: autonomous extension and correction of spatial
    representations},
           publisher = {IEEE},
                year = {2011},
               pages = {3907--3914},
            keywords = {ARRAY(0x7fdc78086610)},
                 url = {http://eprints.lincoln.ac.uk/6764/},
            abstract = {In this paper we present an account
    of the problems faced by a mobile robot given
    an incomplete tour of an unknown environment,
    and introduce a collection of techniques which can
    generate successful behaviour even in the presence
    of such problems. Underlying our approach is the
    principle that an autonomous system must be motivated
    to act to gather new knowledge, and to validate
    and correct existing knowledge. This principle is
    embodied in Dora, a mobile robot which features
    the aforementioned techniques: shared representations,
    non-monotonic reasoning, and goal generation
    and management. To demonstrate how well this
    collection of techniques work in real-world situations
    we present a comprehensive analysis of the Dora
     system's performance over multiple tours in an indoor
    environment. In this analysis Dora successfully
    completed 18 of 21 attempted runs, with all but
    3 of these successes requiring one or more of the
    integrated techniques to recover from problems.}
    }
  • A. Peters, T. P. Spexard, M. Hanheide, and P. Weiss, “Hey robot, get out of my way: survey on a spatial and situational movement concept in HRI,” in Behaviour Monitoring and Interpretation – BMI Well-being, B. Gottfried and H. Aghajan, Eds., IOS Press, 2011.
    [BibTeX] [Abstract] [EPrints]

    Mobile robots are already applied in factories and hospitals, merely to do a distinct task. It is envisioned that robots assist in households soon. Those service robots will have to cope with several situations and tasks and of course with sophisticated human-robot interactions (HRI). Therefore, a robot has not only to consider social rules with respect to proxemics, it must detect in which (interaction) situation it is in and act accordingly. With respect to spatial HRI, we concentrate on the use of non-verbal communication. This chapter stresses the meaning of both, machine movements as signals towards a human and human body language. Considering these aspects will make interaction simpler and smoother. An observational study is presented to acquire a concept of spatial prompting by a robot and by a human. When a person and robot meet in a narrow hallway in order to pass by, they have to make room for each other. But how can a robot make sure that both really want to pass by instead of starting interaction? This especially concerns narrow, non-artificial surroundings. Which social signals are expected by the user and on the other side, can be generated or processed by a robot? The results will show what an appropriate passing behaviour is and how to distinguish between passage situations and others. The results shed light upon the readability of signals in spatial HRI.

    @incollection{lirolem6714,
           booktitle = {Behaviour Monitoring and Interpretation - BMI Well-being},
              editor = {B. Gottfried and H. Aghajan},
               month = {April},
               title = {Hey robot, get out of my way: survey on a spatial and situational movement concept in HRI},
              author = {Annika Peters and Thorsten P. Spexard and Marc Hanheide and Petra Weiss},
           publisher = {IOS Press},
                year = {2011},
            keywords = {ARRAY(0x7fdc780867a8)},
                 url = {http://eprints.lincoln.ac.uk/6714/},
            abstract = {Mobile robots are already applied in factories and hospitals, merely to do a distinct task. It is envisioned that robots assist in households soon. Those service robots will have to cope with several situations and tasks and of course with sophisticated human-robot interactions (HRI). Therefore, a robot has not only to consider social rules with respect to proxemics, it must detect in which (interaction) situation it is in and act accordingly. With respect to spatial HRI, we concentrate on the use of non-verbal communication. This chapter stresses the meaning of both, machine movements as signals towards a human and human body language. Considering these aspects will make interaction simpler and smoother. An observational study is presented to acquire a concept of spatial prompting by a robot and by a human. When a person and robot meet in a narrow hallway in order to pass by, they have to make room for each other. But how can a robot make sure that both really want to pass by instead of starting interaction? This especially concerns narrow, non-artificial surroundings. Which social signals are expected by the user and on the other side, can be generated or processed by a robot? The results will show what an appropriate passing behaviour is and how to distinguish between passage situations and others. The results shed light upon the readability of signals in spatial HRI.}
    }
  • M. Smith, M. Shaker, S. Yue, and T. Duckett, “AltURI: a thin middleware for simulated robot vision applications,” in IEEE International Conference on Computer Science and information Technology (ICCSIT), 2011.
    [BibTeX] [Abstract] [EPrints]

    Fast software performance is often the focus when developing real-time vision-based control applications for robot simulators. In this paper we have developed a thin, high performance middleware for USARSim and other simulators designed for real-time vision-based control applications. It includes a fast image server providing images in OpenCV, Matlab or web formats and a simple command/sensor processor. The interface has been tested in USARSim with an Unmanned Aerial Vehicle using two control applications; landing using a reinforcement learning algorithm and altitude control using elementary motion detection. The middleware has been found to be fast enough to control the flying robot as well as very easy to set up and use.

    @inproceedings{lirolem4824,
           booktitle = {IEEE International Conference on Computer Science and information Technology (ICCSIT)},
               month = {June},
               title = {AltURI: a thin middleware for simulated robot vision applications},
              author = {Mark Smith and Marwan Shaker and Shigang Yue and Tom Duckett},
           publisher = {IEEE},
                year = {2011},
            keywords = {ARRAY(0x7fdc781b5a40)},
                 url = {http://eprints.lincoln.ac.uk/4824/},
            abstract = {Fast software performance is often the focus when developing real-time vision-based control applications for robot simulators. In this paper we have developed a thin, high performance middleware for USARSim and other simulators designed for real-time vision-based control applications. It includes a fast image server providing images in OpenCV, Matlab or web formats and a simple command/sensor processor. The interface has been tested in USARSim with an Unmanned Aerial Vehicle using two control applications; landing using a reinforcement learning algorithm and altitude control using elementary motion detection. The middleware has been found to be fast enough to control the flying robot as well as very easy to set up and use.}
    }
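    The middleware above includes a fast image server that delivers frames in OpenCV-compatible formats. The client-side sketch below illustrates how such frames could be polled and decoded; the URL, port and JPEG encoding are assumptions made for illustration, not the actual AltURI interface.

    import urllib.request
    import numpy as np
    import cv2

    def fetch_frame(url="http://localhost:8080/camera.jpg"):
        """Grab one frame from an HTTP image server and decode it with OpenCV."""
        data = urllib.request.urlopen(url, timeout=1.0).read()
        return cv2.imdecode(np.frombuffer(data, dtype=np.uint8), cv2.IMREAD_COLOR)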
  • O. Szymanezyk, P. Dickinson, and T. Duckett, “From individual characters to large crowds: augmenting the believability of open-world games through exploring social emotion in pedestrian groups,” in Think Design Play: DiGRA Conference, 2011.
    [BibTeX] [Abstract] [EPrints]

    Crowds of non-player characters improve the game-play experiences of open-world video-games. Grouping is a common phenomenon of crowds and plays an important role in crowd behaviour. Recent crowd simulation research focuses on group modelling in pedestrian crowds and game-designers have argued that the design of non-player characters should capture and exploit the relationship between characters. The concepts of social groups and inter-character relationships are not new in social psychology, and on-going work addresses the social life of emotions and its behavioural consequences on individuals and groups alike. The aim of this paper is to provide an overview of current research in social psychology, and to use the findings as a source of inspiration to design a social network of non-player characters, with application to the problem of group modelling in simulated crowds in computer games.

    @inproceedings{lirolem4662,
           booktitle = {Think Design Play: DiGRA Conference},
               month = {September},
               title = {From individual characters to large crowds: augmenting the believability of open-world games through exploring social emotion in pedestrian groups},
              author = {Oliver Szymanezyk and Patrick Dickinson and Tom Duckett},
           publisher = {ARA Digital Media Private Limited},
                year = {2011},
            keywords = {ARRAY(0x7fdc78157860)},
                 url = {http://eprints.lincoln.ac.uk/4662/},
            abstract = {Crowds of non-player characters improve the game-play experiences of open-world video-games. Grouping is a common phenomenon of crowds and plays an important role in crowd behaviour. Recent crowd simulation research focuses on group modelling in pedestrian crowds and game-designers have argued that the design of non-player characters should capture and exploit the relationship between characters. The concepts of social groups and inter-character relationships are not new in social psychology, and on-going work addresses the social life of emotions and its behavioural consequences on individuals and groups alike. The aim of this paper is to provide an overview of current research in social psychology, and to use the findings as a source of inspiration to design a social network of non-player characters, with application to the problem of group modelling in simulated crowds in computer games.}
    }
  • O. Szymanezyk, P. Dickinson, and T. Duckett, “Towards agent-based crowd simulation in airports using games technology,” in Agent and multi-agent systems: technologies and applications, Berlin Heidelberg: Springer-Verlag, 2011, vol. 6682, pp. 524-533.
    [BibTeX] [Abstract] [EPrints]

    We adapt popular video games technology for an agent-based crowd simulation in an airport terminal. To achieve this, we investigate the unique traits of airports and implement a virtual crowd by exploiting a scalable layered intelligence technique in combination with physics middleware and a socialforces approach. Our experiments show that the framework runs at interactive frame-rate and evaluate the scalability with increasing number of agents demonstrating navigation behaviour.

    @incollection{lirolem4569,
              volume = {6682},
              number = {6682},
               month = {September},
              author = {Oliver Szymanezyk and Patrick Dickinson and Tom Duckett},
              series = {Lecture Notes in Computer Science},
           booktitle = {Agent and multi-agent systems: technologies and applications},
               title = {Towards agent-based crowd simulation in airports using games technology},
             address = {Berlin Heidelberg},
           publisher = {Springer-Verlag},
                year = {2011},
               pages = {524--533},
            keywords = {ARRAY(0x7fdc78157890)},
                 url = {http://eprints.lincoln.ac.uk/4569/},
            abstract = {We adapt popular video games technology for an agent-based crowd simulation in an airport terminal. To achieve this, we investigate the unique traits of airports and implement a virtual crowd by exploiting a scalable layered intelligence technique in combination with physics middleware and a socialforces approach. Our experiments show that the framework runs at interactive frame-rate and evaluate the scalability with increasing number of agents demonstrating
    navigation behaviour.}
    }
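    The crowd above combines a social-forces approach with physics middleware. A minimal, self-contained social-forces integration step (relaxation towards a desired velocity plus pairwise exponential repulsion) is sketched below; all parameter values are illustrative, not those used in the implementation.

    import numpy as np

    def social_forces_step(pos, vel, goals, dt=0.05, v_desired=1.3, tau=0.5, A=2.0, B=0.3):
        """pos, vel, goals: (N, 2) arrays; returns the updated (pos, vel)."""
        e = goals - pos
        e = e / (np.linalg.norm(e, axis=1, keepdims=True) + 1e-9)
        acc = (v_desired * e - vel) / tau                              # relax towards desired velocity
        for i in range(len(pos)):
            diff = pos[i] - np.delete(pos, i, axis=0)
            d = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-9
            acc[i] += (A * np.exp(-d / B) * diff / d).sum(axis=0)      # pairwise repulsion
        vel = vel + acc * dt
        pos = pos + vel * dt
        return pos, vel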
  • M. L. Walters, M. Lohse, M. Hanheide, B. Wrede, D. S. Syrdal, K. L. Koay, A. Green, H. Huttenrauch, K. Dautenhahn, G. Sagerer, and K. Severinson-Eklundh, “Evaluating the robot personality and verbal behavior of domestic robots using video-based studies,” Advanced Robotics, vol. 25, iss. 18, pp. 2233-2254, 2011.
    [BibTeX] [Abstract] [EPrints]

    Robots are increasingly being used in domestic environments and should be able to interact with inexperienced users. Human-human interaction and human-computer interaction research findings are relevant, but often limited because robots are different from both humans and computers. Therefore, new human-robot interaction (HRI) research methods can inform the design of robots suitable for inexperienced users. A video-based HRI (VHRI) methodology was here used to carry out a multi-national HRI user study for the prototype domestic robot BIRON (BIelefeld RObot companioN). Previously, the VHRI methodology was used in constrained HRI situations, while in this study HRIs involved a series of events as part of a ‘hometour’ scenario. Thus, the present work is the first study of this methodology in extended HRI contexts with a multi-national approach. Participants watched videos of the robot interacting with a human actor and rated two robot behaviors (Extrovert and Introvert). Participants’ perceptions and ratings of the robot’s behaviors differed with regard to both verbal interactions and person following by the robot. The study also confirms that the VHRI methodology provides a valuable means to obtain early user feedback, even before fully working prototypes are available. This can usefully guide the future design work on robots, and associated verbal and non-verbal behaviors.

    @article{lirolem6560,
              volume = {25},
              number = {18},
               month = {December},
              author = {Michael L. Walters and Manja Lohse and Marc Hanheide and Britta Wrede and Dag Sverre Syrdal and Kheng Lee Koay and Anders Green and Helge Huttenrauch and Kerstin Dautenhahn and Gerhard Sagerer and Kerstin Severinson-Eklundh},
                note = {Robots are increasingly being used in domestic environments and should be able to interact with inexperienced users. Human-human interaction and human-computer interaction research findings are relevant, but often limited because robots are different from both humans and computers. Therefore, new human-robot interaction (HRI) research methods can inform the design of robots suitable for inexperienced users. A video-based HRI (VHRI) methodology was here used to carry out a multi-national HRI user study for the prototype domestic robot BIRON (BIelefeld RObot companioN). Previously, the VHRI methodology was used in constrained HRI situations, while in this study HRIs involved a series of events as part of a 'hometour' scenario. Thus, the present work is the first study of this methodology in extended HRI contexts with a multi-national approach. Participants watched videos of the robot interacting with a human actor and rated two robot behaviors (Extrovert and Introvert). Participants' perceptions and ratings of the robot's behaviors differed with regard to both verbal interactions and person following by the robot. The study also confirms that the VHRI methodology provides a valuable means to obtain early user feedback, even before fully working prototypes are available. This can usefully guide the future design work on robots, and associated verbal and non-verbal behaviors.},
               title = {Evaluating the robot personality and verbal behavior of domestic robots using video-based studies},
           publisher = {Taylor \& Francis},
                year = {2011},
             journal = {Advanced Robotics},
               pages = {2233--2254},
            keywords = {ARRAY(0x7fdc77fd24a0)},
                 url = {http://eprints.lincoln.ac.uk/6560/},
            abstract = {Robots are increasingly being used in domestic environments and should be able to interact with inexperienced users. Human-human interaction and human-computer interaction research findings are relevant, but often limited because robots are different from both humans and computers. Therefore, new human-robot interaction (HRI) research methods can inform the design of robots suitable for inexperienced users. A video-based HRI (VHRI) methodology was here used to carry out a multi-national HRI user study for the prototype domestic robot BIRON (BIelefeld RObot companioN). Previously, the VHRI methodology was used in constrained HRI situations, while in this study HRIs involved a series of events as part of a 'hometour' scenario. Thus, the present work is the first study of this methodology in extended HRI contexts with a multi-national approach. Participants watched videos of the robot interacting with a human actor and rated two robot behaviors (Extrovert and Introvert). Participants' perceptions and ratings of the robot's behaviors differed with regard to both verbal interactions and person following by the robot. The study also confirms that the VHRI methodology provides a valuable means to obtain early user feedback, even before fully working prototypes are available. This can usefully guide the future design work on robots, and associated verbal and non-verbal behaviors.}
    }
  • S. Yue, H. Wei, M. Li, Q. Liang, and L. Wang, “ICNC-FSKD 2010 special issue on computers & mathematics in natural computation and knowledge discovery,” Computers and Mathematics with Applications, vol. 62, iss. 7, pp. 2683-2684, 2011.
    [BibTeX] [Abstract] [EPrints]

    Natural computation, as an exciting and emerging interdisciplinary field, has been witnessing a surge of newly developed theories, methodologies and applications in recent years. These innovations have generated a huge impact in tackling complex and challenging real world problems. Not only are the well established intelligent techniques, such as neural networks, fuzzy systems, genetic and evolutionary algorithms, and cellular automata expanding to new application areas; the new forms of natural computation that have emerged recently, for example, swarm intelligence, artificial immune systems, bio-molecular computing and membrane computing, quantum computing, and granular computing, are also providing additional tools for various applications. One attractive area that natural computation has been playing a major role in is knowledge discovery. There are many success stories on natural computation and knowledge discovery, as you will find out in this special issue

    @article{lirolem10329,
              volume = {62},
              number = {7},
               month = {October},
              author = {Shigang Yue and Hua-Liang Wei and Maozhen Li and Qilian Liang and Lipo Wang},
               title = {ICNC-FSKD 2010 special issue on computers \& mathematics in natural computation and knowledge discovery},
           publisher = {Elsevier},
                year = {2011},
             journal = {Computers and Mathematics with Applications},
               pages = {2683--2684},
            keywords = {ARRAY(0x7fdc78049560)},
                 url = {http://eprints.lincoln.ac.uk/10329/},
            abstract = {Natural computation, as an exciting and emerging interdisciplinary field, has been witnessing a surge of newly developed theories, methodologies and applications in recent years. These innovations have generated a huge impact in tackling complex and challenging real world problems. Not only are the well established intelligent techniques, such as neural networks, fuzzy systems, genetic and evolutionary algorithms, and cellular automata expanding to new application areas; the new forms of natural computation that have emerged recently, for example, swarm intelligence, artificial immune systems, bio-molecular computing and membrane computing, quantum computing, and granular computing, are also providing additional tools for various applications. One attractive area that natural computation has been playing a major role in is knowledge discovery. There are many success stories on natural computation and knowledge discovery, as you will find out in this special issue}
    }

2010

  • F. Arvin, S. Doraisamy, K. Samsudin, and A. R. Ramli, “Self-localization of swarm robots based on voice signal acquisition,” in International Conference on Computer and Communication Engineering (ICCCE), 2010, pp. 1-5.
    [BibTeX] [Abstract] [EPrints]

    This paper presents an acoustical signal tracking experiment by swarm mobile robots. Biological swarm is a fascinating behavior of nature which was inspired from social insects' behavior. A mobile robot that is designed as a swarm robotic platform was employed for implementing voice exploration behavior. An additional module was developed to connect to robots for processing given voice signals using the proportional signal strength approach that estimates orientation of sound source using fuzzy logic approach. The voice processor module utilizes four condenser microphones with around -47db sensitivity which are placed in different directions of the circuit board for capturing surrounding sound signals. Captured samples by microphones are processed to estimate the relative positions of the sound source in the robotic environment. After estimating the position of the signal's source, all participants move towards it, similar to the insects' colony. The participant robots have an individual task for the estimation of source location from captured samples. Moreover, according to the swarm definition, an additional cooperation between swarm participants is required to achieve a correct colony of robots. Obtained results illustrate the feasibility of the proposed technique and hardware interface for sound signals acquisition with swarm robots.

    @inproceedings{lirolem11356,
           booktitle = {International Conference on Computer and Communication Engineering (ICCCE)},
               month = {May},
               title = {Self-localization of swarm robots based on voice signal acquisition},
              author = {Farshad Arvin and Shyamala Doraisamy and Khairulmizam Samsudin and Abdul Rahman Ramli},
           publisher = {IEEE},
                year = {2010},
               pages = {1--5},
            keywords = {ARRAY(0x7fdc781a5c90)},
                 url = {http://eprints.lincoln.ac.uk/11356/},
            abstract = {This paper presents an acoustical signal tracking experiment by swarm mobile robots. Biological swarm is a fascinating behavior of nature which was inspired from social insects' behavior. A mobile robot that is designed as a swarm robotic platform was employed for implementing voice exploration behavior. An additional module was developed to connect to robots for processing given voice signals using the proportional signal strength approach that estimates orientation of sound source using fuzzy logic approach. The voice processor module utilizes four condenser microphones with around -47db sensitivity which are placed in different directions of the circuit board for capturing surrounding sound signals. Captured samples by microphones are processed to estimate the relative positions of the sound source in the robotic environment. After estimating the position of the signal's source, all participants move towards it, similar to the insects' colony. The participant robots have an individual task for the estimation of source location from captured samples. Moreover, according to the swarm definition, an additional cooperation between swarm participants is required to achieve a correct colony of robots. Obtained results illustrate the feasibility of the proposed technique and hardware interface for sound signals acquisition with swarm robots.}
    }
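    The orientation of the sound source is estimated above from the proportional signal strength of four microphones using a fuzzy-logic approach. The sketch below illustrates the underlying idea with a plain amplitude-weighted vector sum instead of the fuzzy estimator; the microphone angles and weighting are assumptions made for illustration.

    import math

    MIC_ANGLES = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]   # assumed mounting: front, left, back, right

    def estimate_bearing(strengths):
        """strengths: four non-negative amplitudes, one per microphone."""
        x = sum(s * math.cos(a) for s, a in zip(strengths, MIC_ANGLES))
        y = sum(s * math.sin(a) for s, a in zip(strengths, MIC_ANGLES))
        return math.atan2(y, x)   # bearing in radians in the robot frame

    # e.g. estimate_bearing([0.2, 0.9, 0.1, 0.3]) points roughly to the robot's left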
  • F. Arvin, K. Samsudin, and R. Ramli, “Development of IR-based short-range communication techniques for swarm robot applications,” Advances in Electrical and Computer Engineering, vol. 10, iss. 4, pp. 61-68, 2010.
    [BibTeX] [Abstract] [EPrints]

    This paper proposes several designs for reliable infra-red based communication techniques for swarm robotic applications. The communication system was deployed on an autonomous miniature mobile robot (AMiR), a swarm robotic platform developed earlier. In swarm applications, all participating robots must be able to communicate and share data. Hence a suitable communication medium and a reliable technique are required. This work uses infrared radiation for transmission of swarm robots messages. Infrared transmission methods such as amplitude and frequency modulations will be presented along with experimental results. Finally the effects of the modulation techniques and other parameters on collective behavior of swarm robots will be analyzed.

    @article{lirolem5796,
              volume = {10},
              number = {4},
               month = {December},
              author = {Farshad Arvin and Khairulmizam Samsudin and Rahman Ramli},
               title = {Development of IR-based short-range communication techniques for swarm robot applications},
           publisher = {AECE},
                year = {2010},
             journal = {Advances in Electrical and Computer Engineering},
               pages = {61--68},
            keywords = {ARRAY(0x7fdc780822a8)},
                 url = {http://eprints.lincoln.ac.uk/5796/},
            abstract = {This paper proposes several designs for reliable infra-red based communication techniques for swarm robotic applications. The communication system was deployed on an autonomous miniature mobile robot (AMiR), a swarm robotic platform developed earlier. In swarm applications, all participating robots must be able to communicate and share data. Hence a suitable communication medium and a reliable technique are required. This work uses infrared radiation for transmission of swarm robots messages. Infrared transmission methods such as amplitude and frequency modulations will be presented along with experimental results. Finally the effects of the modulation techniques and other parameters on collective behavior of swarm robots will be analyzed.}
    }
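    One of the transmission methods discussed above is amplitude modulation of the infrared carrier. The sketch below generates an on-off keyed square carrier for a short bit string, the kind of envelope that would drive the IR LED; the carrier frequency, bit period and sample rate are assumed values, not the AMiR hardware parameters.

    import numpy as np

    def ook_waveform(bits, carrier_hz=38_000, bit_s=0.001, sample_hz=1_000_000):
        """Return the on-off keyed carrier samples for the given bit sequence."""
        t = np.arange(0, bit_s, 1.0 / sample_hz)
        carrier = 0.5 * (1 + np.sign(np.sin(2 * np.pi * carrier_hz * t)))   # square carrier burst
        return np.concatenate([carrier * b for b in bits])                  # carrier on for 1, off for 0

    # waveform = ook_waveform([1, 0, 1, 1, 0])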
  • F. Arvin and S. Doraisamy, “Heart sound musical transcription technique using multi-Level preparation,” International Review on Computers and Software (I.Re.Co.S.), vol. 5, iss. 6, pp. 595-600, 2010.
    [BibTeX] [Abstract] [EPrints]

    Musical transcription of heart sound is a new idea to provide a textual biomedical database. Textual database allows applying several indexing and searching techniques in order to monitor patient behavior for a long duration. MIDI commands produce a semi-structural musical file format which enables to apply various applications. Main objective of this paper is the extraction of fundamental frequency of the given heart sound which is recorded with an electrical stethoscope. Based on extracted fundamental frequencies, the logarithmical relationship of pitch numbers will be estimated. Generally the captured heart sound includes several types of noises such as other organs sound and ambient voice. Hence, filtering of the heart sound is indispensable. Thus, three levels of preparation techniques which are wavelet transform, frequency limitation, and amplitude reconstruction will be applied on the heart sound sequentially. The results of the performed experiments show the accuracy of approximately 93\% $\pm$2. The statistical analyses illustrated that each level of the preparation, significantly improved the accuracy of the transcription (p \ensuremath< 0.005).

    @article{lirolem6087,
              volume = {5},
              number = {6},
               month = {November},
              author = {Farshad Arvin and Shyamala Doraisamy},
               title = {Heart sound musical transcription technique using multi-Level preparation},
           publisher = {Praise Worthy Prize},
                year = {2010},
             journal = {International Review on Computers and Software (I.Re.Co.S.)},
               pages = {595--600},
            keywords = {ARRAY(0x7fdc78086520)},
                 url = {http://eprints.lincoln.ac.uk/6087/},
            abstract = {Musical transcription of heart sound is a new idea to provide a textual biomedical database. Textual database allows applying several indexing and searching techniques in order to monitor patient behavior for a long duration. MIDI commands produce a semi-structural musical file format which enables to apply various applications. Main objective of this paper is the extraction of fundamental frequency of the given heart sound which is recorded with an electrical stethoscope. Based on extracted fundamental frequencies, the logarithmical relationship of pitch numbers will be estimated. Generally the captured heart sound includes several types of noises such as other organs sound and ambient voice. Hence, filtering of the heart sound is indispensable. Thus, three levels of preparation techniques which are wavelet transform, frequency limitation, and amplitude reconstruction will be applied on the heart sound sequentially. The results of the performed experiments show the accuracy of approximately 93\% {$\pm$}2. The statistical analyses illustrated that each level of the preparation, significantly improved the accuracy of the transcription (p {\ensuremath{<}} 0.005).}
    }
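    The logarithmic relationship between a fundamental frequency and its pitch number mentioned above is, in MIDI terms, the standard mapping sketched below; whether the paper applies exactly this rounding is an assumption.

    import math

    def midi_note_number(f0_hz, a4_hz=440.0):
        """Map a fundamental frequency in Hz to the nearest MIDI note number."""
        return round(69 + 12 * math.log2(f0_hz / a4_hz))

    # midi_note_number(261.63) == 60  (middle C)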
  • M. Barnes, T. Duckett, G. Cielniak, G. Stroud, and G. Harper, “Visual detection of blemishes in potatoes using minimalist boosted classifiers,” Journal of Food Engineering, vol. 98, iss. 3, pp. 339-346, 2010.
    [BibTeX] [Abstract] [EPrints]

    This paper introduces novel methods for detecting blemishes in potatoes using machine vision. After segmentation of the potato from the background, a pixel-wise classifier is trained to detect blemishes using features extracted from the image. A very large set of candidate features, based on statistical information relating to the colour and texture of the region surrounding a given pixel, is first extracted. Then an adaptive boosting algorithm (AdaBoost) is used to automatically select the best features for discriminating between blemishes and non-blemishes. With this approach, different features can be selected for different potato varieties, while also handling the natural variation in fresh produce due to different seasons, lighting conditions, etc. The results show that the method is able to build “minimalist” classifiers that optimise detection performance at low computational cost. In experiments, blemish detectors were trained for both white and red potato varieties, achieving 89.6\% and 89.5\% accuracy, respectively.

    @article{lirolem2206,
              volume = {98},
              number = {3},
               month = {June},
              author = {Michael Barnes and Tom Duckett and Grzegorz Cielniak and Graeme Stroud and Glyn Harper},
                note = {This paper introduces novel methods for detecting blemishes in potatoes using machine vision. After segmentation of the potato from the background, a pixel-wise classifier is trained to detect blemishes using features extracted from the image.
    A very large set of candidate features, based on statistical information relating to the colour and texture of the region surrounding a given pixel, is first extracted.
    Then an adaptive boosting algorithm (AdaBoost) is used to automatically select the best features for discriminating between blemishes and non-blemishes.
    With this approach, different features can be selected for different potato varieties, while also handling the natural variation in fresh produce due to different seasons, lighting conditions, etc.
    The results show that the method is able to build ``minimalist'' classifiers that optimise detection performance at low computational cost.
     In experiments, blemish detectors were trained for both white and red potato varieties, achieving 89.6\% and 89.5\% accuracy, respectively.},
               title = {Visual detection of blemishes in potatoes using minimalist boosted classifiers},
           publisher = {Elsevier},
                year = {2010},
             journal = {Journal of Food Engineering},
               pages = {339--346},
            keywords = {ARRAY(0x7fdc7812e738)},
                 url = {http://eprints.lincoln.ac.uk/2206/},
            abstract = {This paper introduces novel methods for detecting blemishes in potatoes using machine vision. After segmentation of the potato from the background, a pixel-wise classifier is trained to detect blemishes using features extracted from the image.
    A very large set of candidate features, based on statistical information relating to the colour and texture of the region surrounding a given pixel, is first extracted.
    Then an adaptive boosting algorithm (AdaBoost) is used to automatically select the best features for discriminating between blemishes and non-blemishes.
    With this approach, different features can be selected for different potato varieties, while also handling the natural variation in fresh produce due to different seasons, lighting conditions, etc.
    The results show that the method is able to build ``minimalist'' classifiers that optimise detection performance at low computational cost.
     In experiments, blemish detectors were trained for both white and red potato varieties, achieving 89.6\% and 89.5\% accuracy, respectively.}
    }
  • M. Barnes, G. Cielniak, and T. Duckett, “Minimalist AdaBoost for blemish identification in potatoes,” in International Conference on Computer Vision and Graphics 2010, 2010, pp. 209-216.
    [BibTeX] [Abstract] [EPrints]

    We present a multi-class solution based on minimalist AdaBoost for identifying blemishes present in visual images of potatoes. Using training examples we use Real AdaBoost to first reduce the feature set by selecting five features for each class, then train binary classifiers for each class, classifying each testing example according to the binary classifier with the highest certainty. Against hand-drawn ground truth data we achieve a pixel match of 83\% accuracy in white potatoes and 82\% in red potatoes. For the task of identifying which blemishes are present in each potato within typical industry defined criteria (10\% coverage) we achieve accuracy rates of 93\% and 94\%, respectively.

    @inproceedings{lirolem5517,
               month = {September},
              author = {Michael Barnes and Grzegorz Cielniak and Tom Duckett},
                note = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Volume 6374 LNCS, Issue PART 1, 2010, Pages 209-216},
           booktitle = {International Conference on Computer Vision and Graphics 2010},
               title = {Minimalist AdaBoost for blemish identification
    in potatoes},
           publisher = {Springer},
               pages = {209--216},
                year = {2010},
            keywords = {ARRAY(0x7fdc78033dd8)},
                 url = {http://eprints.lincoln.ac.uk/5517/},
            abstract = {We present a multi-class solution based on minimalist AdaBoost for identifying blemishes present in visual images of potatoes. Using training examples we use Real AdaBoost to first reduce the feature set by selecting five features for each class, then train binary classifiers for each class, classifying each testing example according to the binary classifier with the highest certainty. Against hand-drawn ground truth data we achieve a pixel match of 83\% accuracy in white potatoes and 82\% in red potatoes. For the task of identifying which blemishes are present in each potato within typical industry defined criteria (10\% coverage) we achieve accuracy rates of 93\% and 94\%, respectively.}
    }
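    A rough scikit-learn sketch of the "minimalist" one-vs-rest scheme: boosting depth-1 stumps so that each class relies on only a handful of features, then labelling each sample with the class whose binary classifier is most confident. It stands in for the paper's own Real AdaBoost implementation, and the feature matrix X is assumed to have been extracted beforehand.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    def train_minimalist(X, y, classes, n_stumps=5):
        """One boosted-stump classifier per blemish class (one-vs-rest)."""
        models = {}
        for c in classes:
            # the default base learner is a depth-1 decision stump, i.e. one feature per round
            models[c] = AdaBoostClassifier(n_estimators=n_stumps).fit(X, (y == c).astype(int))
        return models

    def predict_minimalist(models, X):
        # per sample, pick the class whose binary classifier is most confident
        scores = np.column_stack([m.predict_proba(X)[:, 1] for m in models.values()])
        return np.array(list(models))[scores.argmax(axis=1)]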
  • N. Bellotto and H. Hu, “Computationally efficient solutions for tracking people with a mobile robot: an experimental evaluation of Bayesian filters,” Autonomous Robots, vol. 28, iss. 4, pp. 425-438, 2010.
    [BibTeX] [Abstract] [EPrints]

    Modern service robots will soon become an essential part of modern society. As they have to move and act in human environments, it is essential for them to be provided with a fast and reliable tracking system that localizes people in the neighbourhood. It is therefore important to select the most appropriate filter to estimate the position of these persons. This paper presents three efficient implementations of multisensor-human tracking based on different Bayesian estimators: Extended Kalman Filter (EKF), Unscented Kalman Filter (UKF) and Sampling Importance Resampling (SIR) particle filter. The system implemented on a mobile robot is explained, introducing the methods used to detect and estimate the position of multiple people. Then, the solutions based on the three filters are discussed in detail. Several real experiments are conducted to evaluate their performance, which is compared in terms of accuracy, robustness and execution time of the estimation. The results show that a solution based on the UKF can perform as good as particle filters and can be often a better choice when computational efficiency is a key issue.

    @article{lirolem2286,
              volume = {28},
              number = {4},
               month = {May},
              author = {Nicola Bellotto and Huosheng Hu},
               title = {Computationally efficient solutions for tracking people with a mobile robot: an experimental evaluation of Bayesian filters},
           publisher = {Springer},
                year = {2010},
             journal = {Autonomous Robots},
               pages = {425--438},
            keywords = {ARRAY(0x7fdc78146f40)},
                 url = {http://eprints.lincoln.ac.uk/2286/},
            abstract = {Modern service robots will soon become an essential part of modern society. As they have to move and act in human environments, it is essential for them to be provided with a fast and reliable tracking system that localizes people in the neighbourhood. It is therefore important to select the most appropriate filter to estimate the position of these persons.
    This paper presents three efficient implementations of multisensor-human tracking based on different Bayesian estimators: Extended Kalman Filter (EKF), Unscented Kalman Filter (UKF) and Sampling Importance Resampling (SIR) particle filter. The system implemented on a mobile robot is explained, introducing the methods used to detect and estimate the position of multiple people. Then, the solutions based on the three filters are discussed in detail. Several real experiments are conducted to evaluate their performance, which is compared in terms of accuracy, robustness and execution time of the estimation. The results show that a solution based on the UKF can perform as good as particle filters and can be often a better choice when computational efficiency is a key issue.}
    }
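    For readers unfamiliar with the filter family compared above (EKF, UKF and SIR particle filter), the simplest linear member, a constant-velocity Kalman filter over a 2-D person position, is sketched below. The matrices and noise levels are illustrative assumptions, not the configuration evaluated in the paper.

    import numpy as np

    def kf_step(x, P, z, dt=0.1, q=0.5, r=0.2):
        """x: state [px, py, vx, vy]; P: 4x4 covariance; z: measured [px, py]."""
        F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1.0]])
        H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.0]])
        Q, R = q * np.eye(4), r * np.eye(2)
        x, P = F @ x, F @ P @ F.T + Q                  # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x)            # update with the measurement
        P = (np.eye(4) - K @ H) @ P
        return x, P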
  • N. Bellotto and H. Hu, “A bank of unscented Kalman filters for multimodal human perception with mobile service robots,” International Journal of Social Robotics, vol. 2, iss. 2, pp. 121-136, 2010.
    [BibTeX] [Abstract] [EPrints]

    A new generation of mobile service robots could be ready soon to operate in human environments if they can robustly estimate position and identity of surrounding people. Researchers in this field face a number of challenging problems, among which sensor uncertainties and real-time constraints. In this paper, we propose a novel and efficient solution for simultaneous tracking and recognition of people within the observation range of a mobile robot. Multisensor techniques for legs and face detection are fused in a robust probabilistic framework to height, clothes and face recognition algorithms. The system is based on an efficient bank of Unscented Kalman Filters that keeps a multi-hypothesis estimate of the person being tracked, including the case where the latter is unknown to the robot. Several experiments with real mobile robots are presented to validate the proposed approach. They show that our solutions can improve the robot’s perception and recognition of humans, providing a useful contribution for the future application of service robotics.

    @article{lirolem2566,
              volume = {2},
              number = {2},
               month = {June},
              author = {Nicola Bellotto and Huosheng Hu},
                note = {A new generation of mobile service robots could be ready soon to operate in human environments if they can robustly estimate position and identity of surrounding people. Researchers in this field face a number of challenging problems, among which sensor uncertainties and real-time constraints.
    In this paper, we propose a novel and efficient solution for simultaneous tracking and recognition of people within the observation range of a mobile robot. Multisensor techniques for legs and face detection are fused in a robust probabilistic framework to height, clothes and face recognition algorithms. The system is based on an efficient bank of Unscented Kalman Filters that keeps a multi-hypothesis estimate of the person being tracked, including the case where the latter is unknown to the robot.
    Several experiments with real mobile robots are presented to validate the proposed approach. They show that our solutions can improve the robot's perception and recognition of humans, providing a useful contribution for the future application of service robotics.},
               title = {A bank of unscented Kalman filters for multimodal human perception with mobile service robots},
           publisher = {Springer},
                year = {2010},
             journal = {International Journal of Social Robotics},
               pages = {121--136},
            keywords = {ARRAY(0x7fdc7818c7d0)},
                 url = {http://eprints.lincoln.ac.uk/2566/},
            abstract = {A new generation of mobile service robots could be ready soon to operate in human environments if they can robustly estimate position and identity of surrounding people. Researchers in this field face a number of challenging problems, among which sensor uncertainties and real-time constraints.
    In this paper, we propose a novel and efficient solution for simultaneous tracking and recognition of people within the observation range of a mobile robot. Multisensor techniques for legs and face detection are fused in a robust probabilistic framework to height, clothes and face recognition algorithms. The system is based on an efficient bank of Unscented Kalman Filters that keeps a multi-hypothesis estimate of the person being tracked, including the case where the latter is unknown to the robot.
    Several experiments with real mobile robots are presented to validate the proposed approach. They show that our solutions can improve the robot's perception and recognition of humans, providing a useful contribution for the future application of service robotics.}
    }
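
    As a purely illustrative aside (not the authors' implementation), the core of the multi-hypothesis idea in the entry above can be sketched as a bank of per-identity filters whose weights are re-normalised with Bayes' rule whenever a recognition cue (height, clothes or face) yields per-identity likelihoods; all names and numbers below are invented, including the "unknown" hypothesis label.

        # Hypothetical sketch: Bayesian re-weighting of a bank of identity hypotheses.
        def update_hypothesis_weights(prior_weights, likelihoods):
            """Combine prior hypothesis weights with per-hypothesis measurement
            likelihoods (e.g. from height, clothes or face models) via Bayes' rule."""
            posterior = {h: prior_weights[h] * likelihoods.get(h, 1e-9)
                         for h in prior_weights}
            total = sum(posterior.values()) or 1.0
            return {h: w / total for h, w in posterior.items()}

        # Example: three known identities plus "unknown"; a face match favouring
        # "alice" increases her posterior weight after the update.
        prior = {"alice": 0.25, "bob": 0.25, "carol": 0.25, "unknown": 0.25}
        cue   = {"alice": 0.80, "bob": 0.10, "carol": 0.10, "unknown": 0.20}
        print(update_hypothesis_weights(prior, cue))
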
  • G. Cielniak, T. Duckett, and A. J. Lilienthal, “Data association and occlusion handling for vision-based people tracking by mobile robots,” Robotics and Autonomous Systems, vol. 58, iss. 5, pp. 435-443, 2010.
    [BibTeX] [Abstract] [EPrints]

    This paper presents an approach for tracking multiple persons on a mobile robot with a combination of colour and thermal vision sensors, using several new techniques. First, an adaptive colour model is incorporated into the measurement model of the tracker. Second, a new approach for detecting occlusions is introduced, using a machine learning classifier for pairwise comparison of persons (classifying which one is in front of the other). Third, explicit occlusion handling is incorporated into the tracker. The paper presents a comprehensive, quantitative evaluation of the whole system and its different components using several real world data sets.

    @article{lirolem2277,
              volume = {58},
              number = {5},
               month = {May},
              author = {Grzegorz Cielniak and Tom Duckett and Achim J. Lilienthal},
               title = {Data association and occlusion handling for vision-based people tracking by mobile robots},
           publisher = {Elsevier B.V.},
                year = {2010},
             journal = {Robotics and Autonomous Systems},
               pages = {435--443},
                 url = {http://eprints.lincoln.ac.uk/2277/},
            abstract = {This paper presents an approach for tracking multiple persons on a mobile robot with a combination of colour and thermal vision sensors, using several new techniques. First, an adaptive colour model is incorporated into the measurement model of the tracker. Second, a new approach for detecting occlusions is introduced, using a machine learning classifier for pairwise comparison of persons (classifying which one is in front of the other). Third, explicit occlusion handling is incorporated into the tracker. The paper presents a comprehensive, quantitative evaluation of the whole system and its different components using several real world data sets.}
    }
  • H. Cuayahuitl, S. Renals, O. Lemon, and H. Shimodaira, “Evaluation of a hierarchical reinforcement learning spoken dialogue system,” Computer Speech & Language, vol. 24, iss. 2, pp. 395-429, 2010.
    [BibTeX] [Abstract] [EPrints]

    We describe an evaluation of spoken dialogue strategies designed using hierarchical reinforcement learning agents. The dialogue strategies were learnt in a simulated environment and tested in a laboratory setting with 32 users. These dialogues were used to evaluate three types of machine dialogue behaviour: hand-coded, fully-learnt and semi-learnt. These experiments also served to evaluate the realism of simulated dialogues using two proposed metrics contrasted with "Precision-Recall". The learnt dialogue behaviours used the Semi-Markov Decision Process (SMDP) model, and we report the first evaluation of this model in a realistic conversational environment. Experimental results in the travel planning domain provide evidence to support the following claims: (a) hierarchical semi-learnt dialogue agents are a better alternative (with higher overall performance) than deterministic or fully-learnt behaviour; (b) spoken dialogue strategies learnt with highly coherent user behaviour and conservative recognition error rates (keyword error rate of 20%) can outperform a reasonable hand-coded strategy; and (c) hierarchical reinforcement learning dialogue agents are feasible and promising for the (semi) automatic design of optimized dialogue behaviours in larger-scale systems.

    @article{lirolem22208,
              volume = {24},
              number = {2},
               month = {April},
              author = {Heriberto Cuayahuitl and Steve Renals and Oliver Lemon and Hiroshi Shimodaira},
               title = {Evaluation of a hierarchical reinforcement learning spoken dialogue system},
           publisher = {Elsevier for International Speech Communication Association (ISCA)},
                year = {2010},
             journal = {Computer Speech \& Language},
               pages = {395--429},
                 url = {http://eprints.lincoln.ac.uk/22208/},
            abstract = {We describe an evaluation of spoken dialogue strategies designed using hierarchical reinforcement learning agents. The dialogue strategies were learnt in a simulated environment and tested in a laboratory setting with 32 users. These dialogues were used to evaluate three types of machine dialogue behaviour: hand-coded, fully-learnt and semi-learnt. These experiments also served to evaluate the realism of simulated dialogues using two proposed metrics contrasted with ?Precision-Recall?. The learnt dialogue behaviours used the Semi-Markov Decision Process (SMDP) model, and we report the first evaluation of this model in a realistic conversational environment. Experimental results in the travel planning domain provide evidence to support the following claims: (a) hierarchical semi-learnt dialogue agents are a better alternative (with higher overall performance) than deterministic or fully-learnt behaviour; (b) spoken dialogue strategies learnt with highly coherent user behaviour and conservative recognition error rates (keyword error rate of 20\%) can outperform a reasonable hand-coded strategy; and (c) hierarchical reinforcement learning dialogue agents are feasible and promising for the (semi) automatic design of optimized dialogue behaviours in larger-scale systems.}
    }
  • F. Dayoub, T. Duckett, and G. Cielniak, “Toward an object-based semantic memory for long-term operation of mobile service robots,” in Workshop on Semantic Mapping and Autonomous Knowledge Acquisition, 2010.
    [BibTeX] [Abstract] [EPrints]

    Throughout a lifetime of operation, a mobile service robot needs to acquire, store and update its knowledge of a working environment. This includes the ability to identify and track objects in different places, as well as using this information for interaction with humans. This paper introduces a long-term updating mechanism, inspired by the modal model of human memory, to enable a mobile robot to maintain its knowledge of a changing environment. The memory model is integrated with a hybrid map that represents the global topology and local geometry of the environment, as well as the respective 3D location of objects. We aim to enable the robot to use this knowledge to help humans by suggesting the most likely locations of specific objects in its map. An experiment using omni-directional vision demonstrates the ability to track the movements of several objects in a dynamic environment over an extended period of time.

    @inproceedings{lirolem3866,
           booktitle = {Workshop on Semantic Mapping and Autonomous Knowledge Acquisition},
               month = {October},
               title = {Toward an object-based semantic memory for long-term operation of mobile service robots},
              author = {Feras Dayoub and Tom Duckett and Grzegorz Cielniak},
                year = {2010},
                note = {Throughout a lifetime of operation, a mobile service robot needs to acquire, store and update its knowledge of a working environment. This includes the ability to identify and track objects in different places, as well as using this information for interaction with humans. This paper introduces a long-term updating mechanism, inspired by the modal model of human memory, to enable a mobile robot to maintain its knowledge of a changing environment. The memory model is integrated with a hybrid map that represents the global topology and local geometry of the environment, as well as the respective 3D location of objects. We aim to enable the robot to use this knowledge to help humans by suggesting the most likely locations of specific objects in its map. An experiment using omni-directional vision demonstrates the ability to track the movements of several objects in a dynamic environment over an extended period of time.},
                 url = {http://eprints.lincoln.ac.uk/3866/},
            abstract = {Throughout a lifetime of operation, a mobile service robot needs to acquire, store and update its knowledge of a working environment. This includes the ability to identify and track objects in different places, as well as using this information for interaction with humans. This paper introduces a long-term updating mechanism, inspired by the modal model of human memory, to enable a mobile robot to maintain its knowledge of a changing environment. The memory model is integrated with a hybrid map that represents the global topology and local geometry of the environment, as well as the respective 3D location of objects. We aim to enable the robot to use this knowledge to help humans by suggesting the most likely locations of specific objects in its map. An experiment using omni-directional vision demonstrates the ability to track the movements of several objects in a dynamic environment over an extended period of time.}
    }
  • F. Dayoub, T. Duckett, and G. Cielniak, “Short- and long-term adaptation of visual place memories for mobile robots,” in International Symposium on Remembering Who We Are – Human Memory for Artificial Agents – A Symposium at the AISB 2010 Convention, Leicester, 2010, pp. 21-26.
    [BibTeX] [Abstract] [EPrints]

    This paper presents a robotic implementation of a human-inspired memory model for long-term adaptation of spatial maps for navigation in changing environments. The robot uses an appearance-based representation of its workplace as a map, where the current view and the map are used to estimate the robots current position in the environment. Due to the nature of real-world environments such as houses and offices, where the appearance keeps changing, the map may become out of date after some time. To solve this problem the robot needs to adapt the map continually in response to the changing appearance of the environment. In this work we use local features extracted from panoramic images to represent the appearance of the environment. Adopting concepts of short-term and long-term memory, our method updates the group of feature points for the image representation of a particular place. Experiments using robot sensor data collected over a period of 2 months show that the implemented model is able to adapt successfully to changes.

    @inproceedings{lirolem10036,
               month = {March},
              author = {Feras Dayoub and Tom Duckett and Grzegorz Cielniak},
                 note = {Conference Code:90743},
           booktitle = {International Symposium on Remembering Who We Are - Human Memory for Artificial Agents - A Symposium at the AISB 2010 Convention},
             address = {Leicester},
               title = {Short- and long-term adaptation of visual place memories for mobile robots},
                year = {2010},
             journal = {Proceedings of the International Symposium on Remembering Who We Are - Human Memory for Artificial Agents - A Symposium at the AISB 2010 Convention},
               pages = {21--26},
                 url = {http://eprints.lincoln.ac.uk/10036/},
            abstract = {This paper presents a robotic implementation of a human-inspired memory model for long-term adaptation of spatial maps for navigation in changing environments. The robot uses an appearance-based representation of its workplace as a map, where the current view and the map are used to estimate the robots current position in the environment. Due to the nature of real-world environments such as houses and offices, where the appearance keeps changing, the map may become out of date after some time. To solve this problem the robot needs to adapt the map continually in response to the changing appearance of the environment. In this work we use local features extracted from panoramic images to represent the appearance of the environment. Adopting concepts of short-term and long-term memory, our method updates the group of feature points for the image representation of a particular place. Experiments using robot sensor data collected over a period of 2 months show that the implemented model is able to adapt successfully to changes.}
    }
  • S. Gieselmann, M. Hanheide, and B. Wrede, “Remembering interaction episodes: an unsupervised learning approach for a humanoid robot,” in Conference of 2010 10th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2010, Nashville, TN, 2010, pp. 566-571.
    [BibTeX] [Abstract] [EPrints]

    In this paper we will present a new approach to give a robot the capability to recognize already seen people and to remember details about past interactions. These details are time, length, location(GPS) and involved people of one interaction. Furthermore all features of this system work unsupervised. This means that the robot itself decides e.g. when and which person is important to remember or when an interaction starts. Out of these collected data additional information can be learned. For example a social network is build up which contains how often different people were seen together in the same interaction. © 2010 IEEE.

    @inproceedings{lirolem8321,
               month = {December},
              author = {S. Gieselmann and Marc Hanheide and B. Wrede},
                note = {Conference Code: 83761},
           booktitle = {Conference of 2010 10th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2010},
               title = {Remembering interaction episodes: an unsupervised learning approach for a humanoid robot},
             address = {Nashville, TN},
           publisher = {IEEE},
                year = {2010},
             journal = {2010 10th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2010},
               pages = {566--571},
                 url = {http://eprints.lincoln.ac.uk/8321/},
            abstract = {In this paper we will present a new approach to give a robot the capability to recognize already seen people and to remember details about past interactions. These details are time, length, location(GPS) and involved people of one interaction. Furthermore all features of this system work unsupervised. This means that the robot itself decides e.g. when and which person is important to remember or when an interaction starts. Out of these collected data additional information can be learned. For example a social network is build up which contains how often different people were seen together in the same interaction. {\^A}{\copyright}2010 IEEE.}
    }
  • R. Golombek, S. Wrede, M. Hanheide, and M. Heckmann, “Learning a probabilistic self-awareness model for robotic systems,” in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Taipei, 2010, pp. 2745-2750.
    [BibTeX] [Abstract] [EPrints]

    In order to address the problem of failure detection in the robotics domain, we present in this contribution a so-called self-awareness model, based on the system’s internal data exchange and the inherent dynamics of inter-component communication. The model is strongly data driven and provides an anomaly detector for robotics systems both applicable in-situ at runtime as well as a-posteriori in post-mortem analysis. Current architectures or methods for failure detection in autonomous robots are either implementations of watch dog concepts or are based on excessive amounts of domain-specific error detection code. The approach presented in this contribution provides an avenue for the detection of more subtle anomalies originating from external sources such as the environment itself or system failures such as resource starvation. Additionally, developers are alleviated from explicitly modeling and foreseeing every exceptional situation, instead training the presented probabilistic model with the known normal modes within the specification of the robot system. As we developed and evaluated the self-awareness model on a mobile robot platform featuring an event-driven software architecture, the presented method can easily be applied in other current robotics software architectures. © 2010 IEEE.

    @inproceedings{lirolem8322,
               month = {October},
              author = {R. Golombek and S. Wrede and M. Hanheide and M. Heckmann},
                note = {cited By (since 1996) 0; Conference of 23rd IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010; Conference Date: 18 October 2010 through 22 October 2010; Conference Code: 83389},
           booktitle = {2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
             address = {Taipei},
               title = {Learning a probabilistic self-awareness model for robotic systems},
                year = {2010},
             journal = {IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings},
               pages = {2745--2750},
                 url = {http://eprints.lincoln.ac.uk/8322/},
            abstract = {In order to address the problem of failure detection in the robotics domain, we present in this contribution a so-called self-awareness model, based on the system's internal data exchange and the inherent dynamics of inter-component communication. The model is strongly data driven and provides an anomaly detector for robotics systems both applicable in-situ at runtime as well as a-posteriori in post-mortem analysis. Current architectures or methods for failure detection in autonomous robots are either implementations of watch dog concepts or are based on excessive amounts of domain-specific error detection code. The approach presented in this contribution provides an avenue for the detection of more subtle anomalies originating from external sources such as the environment itself or system failures such as resource starvation. Additionally, developers are alleviated from explicitly modeling and foreseeing every exceptional situation, instead training the presented probabilistic model with the known normal modes within the specification of the robot system. As we developed and evaluated the self-awareness model on a mobile robot platform featuring an event-driven software architecture, the presented method can easily be applied in other current robotics software architectures. {\^A}{\copyright}2010 IEEE.}
    }
  • K. Harmer, S. Yue, K. Guo, K. Adams, and A. Hunter, “Automatic blush detection in "concealed information" test using visual stimuli,” in 2010 International Conference of Soft Computing and Pattern Recognition, 2010, pp. 259-264.
    [BibTeX] [Abstract] [EPrints]

    Blushing has been identified as an indicator of deception, shame, anxiety and embarrassment. Although normally associated with the skin coloration of the face, a blush response also affects skin surface temperature. In this paper, an approach to detect a blush response automatically is presented using the Argus P7225 thermal camera from e2v. The algorithm was tested on a sample population of 51 subjects, while using visual stimuli to elicit a response, and achieved recognition rates of ~77% TPR and ~60% TNR.

    @inproceedings{lirolem9709,
           booktitle = {2010 International Conference of Soft Computing and Pattern Recognition},
               month = {December},
               title = {Automatic blush detection in "concealed information" test using visual stimuli},
              author = {K. Harmer and Shigang Yue and Kun Guo and Karen Adams and Andrew Hunter},
           publisher = {IEEE / Institute of Electrical and Electronics Engineers Incorporated},
                year = {2010},
               pages = {259--264},
                 url = {http://eprints.lincoln.ac.uk/9709/},
            abstract = {Blushing has been identified as an indicator of deception, shame, anxiety and embarrassment. Although normally associated with the skin coloration of the face, a blush response also affects skin surface temperature. In this paper, an approach to detect a blush response automatically is presented using the Argus P7225 thermal camera from e2v. The algorithm was tested on a sample population of 51 subjects, while using visual stimuli to elicit a response, and achieved recognition rates of {\texttt{\char126}}77\% TPR and {\texttt{\char126}}60\% TNR.}
    }
  • N. Hawes and M. Hanheide, “CAST: Middleware for memory-based architecture,” in Conference of 2010 AAAI Workshop; Conference, Atlanta, GA, 2010, pp. 11-12.
    [BibTeX] [Abstract] [EPrints]

    .

    @inproceedings{lirolem8319,
              volume = {WS-10-},
               month = {July},
              author = {N. Hawes and Marc Hanheide},
                note = {Conference Code: 85345},
           booktitle = {Conference of 2010 AAAI Workshop; Conference},
               title = {CAST: Middleware for memory-based architecture},
             address = {Atlanta, GA},
           publisher = {AAAI - Association for the Advancement of Artificial Intelligence},
                year = {2010},
             journal = {AAAI Workshop - Technical Report},
               pages = {11--12},
                 url = {http://eprints.lincoln.ac.uk/8319/},
            abstract = {.}
    }
  • P. Holthaus, I. Lutkebohle, M. Hanheide, and S. Wachsmuth, “Can I help you? A spatial attention system for a receptionist robot,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 6414 L, pp. 325-334, 2010.
    [BibTeX] [Abstract] [EPrints]

    Social interaction between humans takes place in the spatial dimension on a daily basis. We occupy space for ourselves and respect the dynamics of spaces that are occupied by others. In human-robot interaction, the focus has been on other topics so far. Therefore, this work applies a spatial model to a humanoid robot and implements an attention system that is connected to it. The resulting behaviors have been verified in an on-line video study. The questionnaire revealed that these behaviors are applicable and result in a robot that has been perceived as more interested in the human and shows its attention and intentions to a higher degree. © 2010 Springer-Verlag.

    @article{lirolem8317,
              volume = {6414 L},
               month = {November},
              author = {Patrick Holthaus and Ingo Lutkebohle and Marc Hanheide and Sven Wachsmuth},
                note = {Second International Conference on Social Robotics, ICSR 2010, Singapore, November 23-24, 2010. Conference Date: 23 November 2010 through 24 November 2010; Conference Code: 82714},
             address = {Singapore},
               title = {Can I help you? A spatial attention system for a receptionist robot},
           publisher = {Springer},
                year = {2010},
             journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
               pages = {325--334},
                 url = {http://eprints.lincoln.ac.uk/8317/},
            abstract = {Social interaction between humans takes place in the spatial dimension on a daily basis. We occupy space for ourselves and respect the dynamics of spaces that are occupied by others. In human-robot interaction, the focus has been on other topics so far. Therefore, this work applies a spatial model to a humanoid robot and implements an attention system that is connected to it. The resulting behaviors have been verified in an on-line video study. The questionnaire revealed that these behaviors are applicable and result in a robot that has been perceived as more interested in the human and shows its attention and intentions to a higher degree. {\^A}{\copyright} 2010 Springer-Verlag.}
    }
  • C. Lang, S. Wachsmuth, H. Wersing, and M. Hanheide, “Facial expressions as feedback cue in human-robot interaction: a comparison between human and automatic recognition performances,” in Conference of 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition – Workshops, CVPRW 2010, San Francisco, CA, 2010, pp. 79-85.
    [BibTeX] [Abstract] [EPrints]

    Facial expressions are one important nonverbal communication cue, as they can provide feedback in conversations between people and also in human-robot interaction. This paper presents an evaluation of three standard pattern recognition techniques (active appearance models, gabor energy filters, and raw images) for facial feedback interpretation in terms of valence (success and failure) and compares the results to the human performance. The used database contains videos of people interacting with a robot by teaching the names of several objects to it. After teaching, the robot should term the objects correctly. The subjects reacted to its answer while showing spontaneous facial expressions, which were classified in this work. One main result is that an automatic classification of facial expressions in terms of valence using simple standard pattern recognition techniques is possible with an accuracy comparable to the average human classification rate, but with a high variance between different subjects, likewise to the human performance. © 2010 IEEE.

    @inproceedings{lirolem8323,
               month = {June},
              author = {C. Lang and S. Wachsmuth and H. Wersing and Marc Hanheide},
                note = {Conference Code: 81678},
           booktitle = {Conference of 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, CVPRW 2010},
             address = {San Francisco, CA},
               title = {Facial expressions as feedback cue in human-robot interaction: a comparison between human and automatic recognition performances},
                year = {2010},
             journal = {2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, CVPRW 2010},
               pages = {79--85},
                 url = {http://eprints.lincoln.ac.uk/8323/},
            abstract = {Facial expressions are one important nonverbal communication cue, as they can provide feedback in conversations between people and also in human-robot interaction. This paper presents an evaluation of three standard pattern recognition techniques (active appearance models, gabor energy filters, and raw images) for facial feedback interpretation in terms of valence (success and failure) and compares the results to the human performance. The used database contains videos of people interacting with a robot by teaching the names of several objects to it. After teaching, the robot should term the objects correctly. The subjects reacted to its answer while showing spontaneous facial expressions, which were classified in this work. One main result is that an automatic classification of facial expressions in terms of valence using simple standard pattern recognition techniques is possible with an accuracy comparable to the average human classification rate, but with a high variance between different subjects, likewise to the human performance. {\^A}{\copyright} 2010 IEEE.}
    }
  • C. Lang, S. Wachsmuth, H. Wersing, and M. Hanheide, “Facial expressions as feedback cue in human-robot interaction – a comparison between human and automatic recognition performances,” in Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2010, pp. 79-85.
    [BibTeX] [Abstract] [EPrints]

    Facial expressions are one important nonverbal communication cue, as they can provide feedback in conversations between people and also in human-robot interaction. This paper presents an evaluation of three standard pattern recognition techniques (active appearance models, gabor energy filters, and raw images) for facial feedback interpretation in terms of valence (success and failure) and compares the results to the human performance. The used database contains videos of people interacting with a robot by teaching the names of several objects to it. After teaching, the robot should term the objects correctly. The subjects reacted to its answer while showing spontaneous facial expressions, which were classified in this work. One main result is that an automatic classification of facial expressions in terms of valence using simple standard pattern recognition techniques is possible with an accuracy comparable to the average human classification rate, but with a high variance between different subjects, likewise to the human performance.

    @inproceedings{lirolem6913,
               month = {June},
              author = {Christian Lang and Sven Wachsmuth and Heiko Wersing and Marc Hanheide},
                note = {Facial expressions are one important nonverbal communication cue, as they can provide feedback in conversations between people and also in human-robot interaction. This paper presents an evaluation of three standard pattern recognition techniques (active appearance models, gabor energy filters, and raw images) for facial feedback interpretation in terms of valence (success and failure) and compares the results to the human performance. The used database contains videos of people interacting with a robot by teaching the names of several objects to it. After teaching, the robot should term the objects correctly. The subjects reacted to its answer while showing spontaneous facial expressions, which were classified in this work. One main result is that an automatic classification of facial expressions in terms of valence using simple standard pattern recognition techniques is possible with an accuracy comparable to the average human classification rate, but with a high variance between different subjects, likewise to the human performance.},
           booktitle = {Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition Workshops (CVPRW)},
              editor = {B. Gottfried and H. Aghajan},
               title = {Facial expressions as feedback cue in human-robot interaction - a comparison between human and automatic recognition performances},
           publisher = {IEEE},
                year = {2010},
               pages = {79--85},
                 url = {http://eprints.lincoln.ac.uk/6913/},
            abstract = {Facial expressions are one important nonverbal communication cue, as they can provide feedback in conversations between people and also in human-robot interaction. This paper presents an evaluation of three standard pattern recognition techniques (active appearance models, gabor energy filters, and raw images) for facial feedback interpretation in terms of valence (success and failure) and compares the results to the human performance. The used database contains videos of people interacting with a robot by teaching the names of several objects to it. After teaching, the robot should term the objects correctly. The subjects reacted to its answer while showing spontaneous facial expressions, which were classified in this work. One main result is that an automatic classification of facial expressions in terms of valence using simple standard pattern recognition techniques is possible with an accuracy comparable to the average human classification rate, but with a high variance between different subjects, likewise to the human performance.}
    }
  • J. Li, A. Lilienthal, and T. Duckett, “A visual-guided data collection system for learning from demonstration in mobile robotics,” in ICINA 2010 – 2010 International Conference on Information, Networking and Automation, Kunming, 2010, pp. V1289–V1293.
    [BibTeX] [Abstract] [EPrints]

    Robot learning from demonstration (LfD) requires data collection for mapping the sensory states to motion action, which plays a significant role in the learning efficiency and effectiveness. This paper presents a visual-guided data collection system that allows a human demonstrator to teleoperate or to visually guide a mobile robot for the required behaviors, when simultaneously recording the sensory-motor training examples within LfD. In the teleoperation mode, the human demonstrator can teleoperate the robot through a GUI that consists of the velocity control and sensory-motor recording commands with the monitoring windows for sonar, laser and visual image. In the visual-guided mode, the human demonstrator uses a green can as the command stick that is tracked by a pan-tilt-zoom (PTZ) camera. The system is implemented on a Peoplebot robot. Experiments show that both demonstration modes of the framework provide a user-friendly interface of data collection for the subsequent learning process of the robot. © 2010 IEEE.

    @inproceedings{lirolem10023,
              volume = {1},
               month = {October},
              author = {J. Li and A. Lilienthal and Tom Duckett},
                note = {Conference Code:82996},
           booktitle = {ICINA 2010 - 2010 International Conference on Information, Networking and Automation},
             address = {Kunming},
               title = {A visual-guided data collection system for learning from demonstration in mobile robotics},
                year = {2010},
             journal = {ICINA 2010 - 2010 International Conference on Information, Networking and Automation, Proceedings},
               pages = {V1289--V1293},
                 url = {http://eprints.lincoln.ac.uk/10023/},
            abstract = {Robot learning from demonstration (LID) requires data collection for mapping the sensory states to motion action, which plays a significant role in the learning efficiency and effectiveness. This paper presents a visual-guided data collection system that allows a human demonstrator to teleoperate or to visually guide a mobile robot for the required behaviors, when simultaneously recording the sensory-motor training examples within LID. In the teleoperation mode, the human demonstrator can teleoperate the robot through a GUI that consists of the velocity control and sensory-motor recording commands with the monitoring windows for sonar, laser and visual image. In the visual-guided mode, the human demonstrator uses a green can as the command stick that is tracked by a pan-tilt-zoom (PTZ) camera. The system is implemented on a Peoplebot robot. Experiments show that both demonstration modes of the framework provide an user-friendly interface of data collection for the subsequent learning process of the robot. {\^A}{\copyright} 2010 IEEE.}
    }
  • J. Li, B. Wang, and T. Duckett, “A data collection framework for learning from demonstration in mobile robotics,” in IASTED International Conference on Robotics and Applications, RA 2010, Cambridge, MA, 2010, pp. 137-142.
    [BibTeX] [Abstract] [EPrints]

    Robot learning from demonstration (LfD) requires data collection for mapping the sensory states to motion action, which plays a significant role in the learning efficiency and effectiveness. In this paper we present a data collection framework that allows a human demonstrator to teleoperate or to visually guide a mobile robot for the required behaviors, while the sensory-motor examples are simultaneously gathered. In the teleoperation mode, the human demonstrator can teleoperate the robot through a GUI that consists of the velocity control and sensory-motor recording commands with the monitoring windows for sonar, laser and visual image. In the visual-guided mode, the human demonstrator uses a green can as the command stick that is tracked by a pan-tilt-zoom (PTZ) camera. The framework is implemented on a Peoplebot robot. Experiments show that both demonstration modes of the framework provide an user-friendly interface of data collection for the subsequent learning process of the robot.

    @inproceedings{lirolem10050,
               month = {November},
              author = {J. Li and B. Wang and Tom Duckett},
                note = {Conference Code:89095},
           booktitle = {IASTED International Conference on Robotics and Applications, RA 2010},
             address = {Cambridge, MA},
               title = {A data collection framework for learning from demonstration in mobile robotics},
           publisher = {IASTED},
                year = {2010},
               pages = {137--142},
                 url = {http://eprints.lincoln.ac.uk/10050/},
            abstract = {Robot learning from demonstration (LfD) requires data collection for mapping the sensory states to motion action, which plays a significant role in the learning efficiency and effectiveness. In this paper we present a data collection framework that allows a human demonstrator to teleoperate or to visually guide a mobile robot for the required behaviors, while the sensory-motor examples are simultaneously gathered. In the teleoperation mode, the human demonstrator can teleoperate the robot through a GUI that consists of the velocity control and sensory-motor recording commands with the monitoring windows for sonar, laser and visual image. In the visual-guided mode, the human demonstrator uses a green can as the command stick that is tracked by a pan-tilt-zoom (PTZ) camera. The framework is implemented on a Peoplebot robot. Experiments show that both demonstration modes of the framework provide an user-friendly interface of data collection for the subsequent learning process of the robot.}
    }
  • H. Meng, K. Appiah, S. Yue, A. Hunter, M. Hobden, N. Priestley, P. Hobden, and C. Pettit, “A modified model for the Lobula Giant Movement Detector and its FPGA implementation,” Computer vision and image understanding, vol. 114, iss. 11, pp. 1238-1247, 2010.
    [BibTeX] [Abstract] [EPrints]

    The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron located in the Lobula layer of the Locust nervous system. The LGMD increases its firing rate in response to both the velocity of an approaching object and the proximity of this object. It has been found that it can respond to looming stimuli very quickly and trigger avoidance reactions. It has been successfully applied in visual collision avoidance systems for vehicles and robots. This paper introduces a modified neural model for LGMD that provides additional depth direction information for the movement. The proposed model retains the simplicity of the previous model by adding only a few new cells. It has been simplified and implemented on a Field Programmable Gate Array (FPGA), taking advantage of the inherent parallelism exhibited by the LGMD, and tested on real-time video streams. Experimental results demonstrate the effectiveness as a fast motion detector.

    @article{lirolem2315,
              volume = {114},
              number = {11},
               month = {November},
              author = {Hongying Meng and Kofi Appiah and Shigang Yue and Andrew Hunter and Mervyn Hobden and Nigel  Priestley and Peter Hobden and Cy Pettit},
               title = {A modified model for the Lobula Giant Movement Detector and its FPGA implementation},
           publisher = {Elsevier},
                year = {2010},
             journal = {Computer vision and image understanding},
               pages = {1238--1247},
                 url = {http://eprints.lincoln.ac.uk/2315/},
            abstract = {The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron located in the Lobula layer of the Locust nervous system. The LGMD increases its firing rate in response to both the velocity of an approaching object and the proximity of this object. It has been found that it can respond to looming stimuli very quickly and trigger avoidance reactions. It has been successfully applied in
    visual collision avoidance systems for vehicles and robots. This paper introduces a modified neural model for LGMD that provides additional depth direction information for the movement. The proposed model retains the simplicity of the previous model by adding only a few new cells. It has been
    simplified and implemented on a Field Programmable Gate Array (FPGA), taking advantage of the inherent parallelism exhibited by the LGMD, and tested on real-time video streams. Experimental results demonstrate the effectiveness as a fast motion detector.}
    }
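
    For readers unfamiliar with the LGMD, the entry above describes a looming detector built from excitation, delayed lateral inhibition and a summing membrane. The snippet below is a rough, assumption-laden NumPy sketch of that generic structure only; it is not the paper's modified model or its FPGA implementation, and the frame sizes, 3x3 inhibition kernel and weighting are illustrative.

        import numpy as np

        def _local_mean3x3(a):
            # 3x3 neighbourhood mean via padded shifts (models the spread of inhibition)
            p = np.pad(a, 1, mode="edge")
            h, w = a.shape
            return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

        def lgmd_step(prev_frame, frame, prev_excitation, w_inhibition=0.4):
            """One time step: excitation from frame differencing, delayed lateral
            inhibition from the previous excitation layer, half-rectified summation,
            and a sigmoid membrane output for the LGMD cell."""
            excitation = np.abs(frame.astype(float) - prev_frame.astype(float))
            inhibition = _local_mean3x3(prev_excitation)
            summed = np.maximum(excitation - w_inhibition * inhibition, 0.0)
            membrane = summed.sum() / summed.size
            output = 1.0 / (1.0 + np.exp(-membrane))
            return output, excitation

        # usage with two random frames and a zero initial excitation layer
        f0, f1 = np.random.rand(2, 48, 64)
        out, exc = lgmd_step(f0, f1, np.zeros_like(f0))
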
  • M. Shaker, T. Duckett, and S. Yue, “A vision-guided parallel parking system for a mobile robot using approximate policy iteration,” in 11th Conference Towards Autonomous Robotic Systems (TAROS’2010), 2010.
    [BibTeX] [Abstract] [EPrints]

    Reinforcement Learning (RL) methods enable autonomous robots to learn skills from scratch by interacting with the environment. However, reinforcement learning can be very time consuming. This paper focuses on accelerating the reinforcement learning process on a mobile robot in an unknown environment. The presented algorithm is based on approximate policy iteration with a continuous state space and a fixed number of actions. The action-value function is represented by a weighted combination of basis functions. Furthermore, a complexity analysis is provided to show that the implemented approach is guaranteed to converge on an optimal policy with less computational time. A parallel parking task is selected for testing purposes. In the experiments, the efficiency of the proposed approach is demonstrated and analyzed through a set of simulated and real robot experiments, with comparison drawn from two well known algorithms (Dyna-Q and Q-learning).

    @inproceedings{lirolem3865,
           booktitle = {11th Conference Towards Autonomous Robotic Systems (TAROS'2010)},
               month = {September},
               title = {A vision-guided parallel parking system for a mobile robot using approximate policy iteration},
              author = {Marwan Shaker and Tom Duckett and Shigang Yue},
                year = {2010},
                note = {Reinforcement Learning (RL) methods enable autonomous robots to learn skills from scratch by interacting with the environment. However, reinforcement learning can be very time consuming. This paper focuses on accelerating the reinforcement learning process on a mobile robot in an unknown environment. The presented algorithm is based on approximate policy iteration with a continuous state space and a fixed number of actions. The action-value function is represented by a weighted combination of basis functions.
    Furthermore, a complexity analysis is provided to show that the implemented approach is guaranteed to converge on an optimal policy with less computational time.
    A parallel parking task is selected for testing purposes. In the experiments, the efficiency of the proposed approach is demonstrated and analyzed through a set of simulated and real robot experiments, with comparison drawn from two well known algorithms (Dyna-Q and Q-learning).},
                 url = {http://eprints.lincoln.ac.uk/3865/},
            abstract = {Reinforcement Learning (RL) methods enable autonomous robots to learn skills from scratch by interacting with the environment. However, reinforcement learning can be very time consuming. This paper focuses on accelerating the reinforcement learning process on a mobile robot in an unknown environment. The presented algorithm is based on approximate policy iteration with a continuous state space and a fixed number of actions. The action-value function is represented by a weighted combination of basis functions.
    Furthermore, a complexity analysis is provided to show that the implemented approach is guaranteed to converge on an optimal policy with less computational time.
    A parallel parking task is selected for testing purposes. In the experiments, the efficiency of the proposed approach is demonstrated and analyzed through a set of simulated and real robot experiments, with comparison drawn from two well known algorithms (Dyna-Q and Q-learning).}
    }
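
    As a point of reference for the representation mentioned in the entry above (an action-value function expressed as a weighted combination of basis functions over a continuous state space with a fixed, finite action set), here is a small, hypothetical Python sketch; the feature layout, action set and weights are invented for illustration and are not the paper's parking controller.

        import numpy as np

        ACTIONS = [0, 1, 2]   # e.g. steer-left / straight / steer-right (assumed discretisation)

        def phi(state, action):
            """Block features: copy the state features into the block of the chosen action."""
            feats = np.asarray(state, dtype=float)
            out = np.zeros(feats.size * len(ACTIONS))
            out[action * feats.size:(action + 1) * feats.size] = feats
            return out

        def q_value(weights, state, action):
            return float(weights @ phi(state, action))      # Q(s, a) = w . phi(s, a)

        def greedy_action(weights, state):
            return max(ACTIONS, key=lambda a: q_value(weights, state, a))

        # usage: a zero weight vector stands in for weights that approximate policy
        # iteration (e.g. LSPI) would learn from collected samples
        w = np.zeros(phi([0.0, 0.0], 0).size)
        print(greedy_action(w, [0.5, -0.2]))
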
  • M. Shaker, M. N. R. Smith, S. Yue, and T. Duckett, “Vision-based landing of a simulated unmanned aerial vehicle with fast reinforcement learning,” in International Symposium on Learning and Adaptive Behaviour in Robotics Systems (LAB-RS 2010), 2010.
    [BibTeX] [Abstract] [EPrints]

    Landing is one of the difficult challenges for an unmanned aerial vehicle (UAV). In this paper, we propose a vision-based landing approach for an autonomous UAV using reinforcement learning (RL). The autonomous UAV learns the landing skill from scratch by interacting with the environment. The reinforcement learning algorithm explored and extended in this study is Least-Squares Policy Iteration (LSPI) to gain a fast learning process and a smooth landing trajectory. The proposed approach has been tested with a simulated quadrocopter in an extended version of the USARSim (Unified System for Automation and Robot Simulation) environment. Results showed that LSPI learned the landing skill very quickly, requiring less than 142 trials.

    @inproceedings{lirolem3867,
           booktitle = {International Symposium on Learning and Adaptive Behaviour in Robotics Systems (LAB-RS 2010)},
               month = {August},
               title = {Vision-based landing of a simulated unmanned aerial vehicle with fast reinforcement learning},
              author = {Marwan Shaker and Mark N. R. Smith and Shigang Yue and Tom Duckett},
                year = {2010},
                note = {Also: Emerging Security Technologies (EST), 2010 International Conference on },
                 url = {http://eprints.lincoln.ac.uk/3867/},
            abstract = {Landing is one of the difficult challenges for an unmanned
    aerial vehicle (UAV). In this paper, we propose a vision-based landing approach for an autonomous UAV using reinforcement learning (RL). The autonomous UAV learns the landing skill from scratch by interacting with the environment. The reinforcement learning algorithm explored and extended in this study is Least-Squares Policy Iteration (LSPI) to gain a fast learning process and a smooth landing trajectory. The proposed approach has been tested with a simulated quadrocopter in an extended version of the USARSim Unified System for Automation and Robot Simulation) environment. Results showed that LSPI learned the landing skill very quickly, requiring less than 142 trials.}
    }
  • O. Szymanezyk and G. Cielniak, “Group emotion modelling and the use of middleware for virtual crowds in video-games,” in The Thirty Sixth Annual Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB'10), 2010.
    [BibTeX] [Abstract] [EPrints]

    In this paper we discuss the use of crowd simulation in video-games to augment their realism. Using previous works on emotion modelling and virtual crowds we define a game world in an urban context. To achieve that, we explore a biologically inspired human emotion model, investigate the formation of groups in crowds, and examine the use of physics middleware for crowds. Furthermore, we assess the realism and computational performance of the proposed approach. Our system runs at interactive frame-rate and can generate large crowds which demonstrate complex behaviour.

    @inproceedings{lirolem6295,
           booktitle = {The Thirty Sixth Annual Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB'10)},
               month = {March},
               title = {Group emotion modelling and the use of middleware for virtual crowds in video-games},
              author = {Oliver Szymanezyk and Grzegorz Cielniak},
           publisher = {AISB Daniela M. Romano and David C. Moffat},
                year = {2010},
                note = {In this paper we discuss the use of crowd
    simulation in video-games to augment their realism. Using
    previous works on emotion modelling and virtual crowds we
    define a game world in an urban context. To achieve that, we
    explore a biologically inspired human emotion model,
    investigate the formation of groups in crowds, and examine
    the use of physics middleware for crowds. Furthermore, we
    assess the realism and computational performance of the
    proposed approach. Our system runs at interactive frame-rate
    and can generate large crowds which demonstrate complex
    behaviour.},
                 url = {http://eprints.lincoln.ac.uk/6295/},
            abstract = {In this paper we discuss the use of crowd
    simulation in video-games to augment their realism. Using
    previous works on emotion modelling and virtual crowds we
    define a game world in an urban context. To achieve that, we
    explore a biologically inspired human emotion model,
    investigate the formation of groups in crowds, and examine
    the use of physics middleware for crowds. Furthermore, we
    assess the realism and computational performance of the
    proposed approach. Our system runs at interactive frame-rate
    and can generate large crowds which demonstrate complex
    behaviour.}
    }
  • J. L. Wyatt, A. Aydemir, M. Brenner, M. Hanheide, N. Hawes, P. Jensfelt, M. Kristan, G. M. Kruijff, P. Lison, A. Pronobis, K. Sjoo, A. Vrecko, H. Zender, M. Zillich, and D. Skocaj, “Self-understanding and self-extension: a systems and representational approach,” Autonomous Mental Development, IEEE Transactions on, vol. 2, iss. 4, pp. 282-303, 2010.
    [BibTeX] [Abstract] [EPrints]

    There are many different approaches to building a system that can engage in autonomous mental development. In this paper we present an approach based on what we term em self-understanding, by which we mean the use of explicit representation of and reasoning about what a system does and doesn’t know, and how that understanding changes under action. We present a coherent architecture and a set of representations used in two robot systems that exhibit a limited degree of autonomous mental development, what we term em self-extension. The contributions include: representations of gaps and uncertainty for specific kinds of knowledge, and a motivational and planning system for setting and achieving learning goals

    @article{lirolem6699,
              volume = {2},
              number = {4},
               month = {December},
              author = {Jeremy L. Wyatt and Alper Aydemir and Michael Brenner and Marc Hanheide and Nick Hawes and Patric Jensfelt and Matej Kristan and Geert-Jan M. Kruijff and Pierre Lison and Andrzej Pronobis and Kristoffer Sjoo and Alen Vrecko and Hendrik Zender and Michael Zillich and Danijel Skocaj},
                note = {There are many different approaches to building a system that can engage in autonomous mental development. In this paper we present an approach based on what we term em self-understanding, by which we mean the use of explicit representation of and reasoning about what a system does and doesn't know, and how that understanding changes under action. We present a coherent architecture and a set of representations used in two robot systems that exhibit a limited degree of autonomous mental development, what we term em self-extension. The contributions include: representations of gaps and uncertainty for specific kinds of knowledge, and a motivational and planning system for setting and achieving learning goals},
               title = {Self-understanding and self-extension: a systems and representational approach},
           publisher = {IEEE},
                year = {2010},
             journal = {Autonomous Mental Development, IEEE Transactions on},
               pages = {282--303},
                 url = {http://eprints.lincoln.ac.uk/6699/},
            abstract = {There are many different approaches to building a system that can engage in autonomous mental development. In this paper we present an approach based on what we term em self-understanding, by which we mean the use of explicit representation of and reasoning about what a system does and doesn't know, and how that understanding changes under action. We present a coherent architecture and a set of representations used in two robot systems that exhibit a limited degree of autonomous mental development, what we term em self-extension. The contributions include: representations of gaps and uncertainty for specific kinds of knowledge, and a motivational and planning system for setting and achieving learning goals}
    }
  • F. Yuan, L. Twardon, and M. Hanheide, “Dynamic path planning adopting human navigation strategies for a domestic mobile robot,” in Conference of 23rd IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010, Taipei, 2010, pp. 3275-3281.
    [BibTeX] [Abstract] [EPrints]

    Mobile robots that are employed in people’s homes need to safely navigate their environment. And natural human-inhabited environments still pose significant challenges for robots despite the impressive progress that has been achieved in the field of path planning and obstacle avoidance. These challenges mostly arise from the fact that (i) the perceptual abilities of a robot are limited, thus sometimes impeding its ability to see relevant obstacles (e.g. transparent objects), and (ii) the environment is highly dynamic being populated by humans. In this contribution we are making a case for an integrated solution to these challenges that builds upon the analysis and use of implicit human knowledge in path planning and a cascade of replanning approaches. We combine state of the art path planning and obstacle avoidance algorithms with the knowledge about how humans navigate in their very own environment. The approach results in a more robust and predictable navigation ability for domestic robots as is demonstrated in a number of experimental runs. © 2010 IEEE.

    @inproceedings{lirolem8320,
               month = {October},
              author = {F. Yuan and L. Twardon and Marc Hanheide},
                note = {Conference Code: 83389},
           booktitle = {Conference of 23rd IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010},
               title = {Dynamic path planning adopting human navigation strategies for a domestic mobile robot},
             address = {Taipei},
           publisher = {IEEE},
                year = {2010},
             journal = {IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings},
               pages = {3275--3281},
                 url = {http://eprints.lincoln.ac.uk/8320/},
             abstract = {Mobile robots that are employed in people's homes need to safely navigate their environment, yet natural human-inhabited environments still pose significant challenges for robots despite the impressive progress that has been achieved in the field of path planning and obstacle avoidance. These challenges mostly arise from the fact that (i) the perceptual abilities of a robot are limited, thus sometimes impeding its ability to see relevant obstacles (e.g. transparent objects), and (ii) the environment is highly dynamic, being populated by humans. In this contribution we are making a case for an integrated solution to these challenges that builds upon the analysis and use of implicit human knowledge in path planning and a cascade of replanning approaches. We combine state-of-the-art path planning and obstacle avoidance algorithms with knowledge about how humans navigate in their very own environment. The approach results in a more robust and predictable navigation ability for domestic robots, as is demonstrated in a number of experimental runs. {\copyright} 2010 IEEE.}
    }
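
    The "cascade of replanning approaches" mentioned in the abstract above can be pictured with a minimal sketch: try the preferred planning strategy first and fall back to a less constrained one when it fails. The planners below are hypothetical placeholders, not the components used on the actual robot.

        def plan_with_cascade(start, goal, planners):
            """A cascade of (re)planning strategies in spirit only: try the most
            preferred planner first and fall back to the next one when it fails.
            The concrete planners here are hypothetical placeholders."""
            for name, planner in planners:
                path = planner(start, goal)
                if path is not None:
                    return name, path
            return "failed", None

        # Illustrative stand-ins for "prefer human-used routes, else plan freely".
        def human_route_planner(start, goal):
            # e.g. search restricted to regions frequently traversed by the occupants
            return None                     # pretend no human-preferred route exists

        def free_space_planner(start, goal):
            return [start, goal]            # trivially returns a straight-line path

        print(plan_with_cascade((0, 0), (3, 4),
                                [("human_routes", human_route_planner),
                                 ("free_space", free_space_planner)]))
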
  • S. Yue, R. D. Santer, Y. Yamawaki, and C. F. Rind, “Reactive direction control for a mobile robot: A locust-like control of escape direction emerges when a bilateral pair of model locust visual neurons are integrated,” Autonomous Robots, vol. 28, iss. 2, pp. 151-167, 2010.
    [BibTeX] [Abstract] [EPrints]

    Locusts possess a bilateral pair of uniquely identifiable visual neurons that respond vigorously to the image of an approaching object. These neurons are called the lobula giant movement detectors (LGMDs). The locust LGMDs have been extensively studied and this has led to the development of an LGMD model for use as an artificial collision detector in robotic applications. To date, robots have been equipped with only a single, central artificial LGMD sensor, and this triggers a non-directional stop or rotation when a potentially colliding object is detected. Clearly, for a robot to behave autonomously, it must react differently to stimuli approaching from different directions. In this study, we implement a bilateral pair of LGMD models in Khepera robots equipped with normal and panoramic cameras. We integrate the responses of these LGMD models using methodologies inspired by research on escape direction control in cockroaches. Using ‘randomised winner-take-all’ or ‘steering wheel’ algorithms for LGMD model integration, the Khepera robots could escape an approaching threat in real time and with a similar distribution of escape directions as real locusts. We also found that by optimising these algorithms, we could use them to integrate the left and right DCMD responses of real jumping locusts offline and reproduce the actual escape directions that the locusts took in a particular trial. Our results significantly advance the development of an artificial collision detection and evasion system based on the locust LGMD by allowing it reactive control over robot behaviour. The success of this approach may also indicate some important areas to be pursued in future biological research.

    @article{lirolem2669,
              volume = {28},
              number = {2},
               month = {February},
              author = {Shigang Yue and Roger D. Santer and Yoshifumi Yamawaki and F. Claire Rind},
               title = {Reactive direction control for a mobile robot: A locust-like control of escape direction emerges when a bilateral pair of model locust visual neurons are integrated},
           publisher = {Springer Verlag},
                year = {2010},
             journal = {Autonomous Robots},
               pages = {151--167},
            keywords = {ARRAY(0x7fdc7808eed0)},
                 url = {http://eprints.lincoln.ac.uk/2669/},
            abstract = {Locusts possess a bilateral pair of uniquely identifiable visual neurons that respond vigorously to
    the image of an approaching object. These neurons are called the lobula giant movement
    detectors (LGMDs). The locust LGMDs have been extensively studied and this has led to the
    development of an LGMD model for use as an artificial collision detector in robotic applications.
    To date, robots have been equipped with only a single, central artificial LGMD sensor, and this
    triggers a non-directional stop or rotation when a potentially colliding object is detected. Clearly,
    for a robot to behave autonomously, it must react differently to stimuli approaching from
    different directions. In this study, we implement a bilateral pair of LGMD models in Khepera
    robots equipped with normal and panoramic cameras. We integrate the responses of these LGMD
    models using methodologies inspired by research on escape direction control in cockroaches.
    Using 'randomised winner-take-all' or 'steering wheel' algorithms for LGMD model integration,
    the Khepera robots could escape an approaching threat in real time and with a similar
    distribution of escape directions as real locusts. We also found that by optimising these
    algorithms, we could use them to integrate the left and right DCMD responses of real jumping
    locusts offline and reproduce the actual escape directions that the locusts took in a particular
    trial. Our results significantly advance the development of an artificial collision detection and
    evasion system based on the locust LGMD by allowing it reactive control over robot behaviour.
    The success of this approach may also indicate some important areas to be pursued in future
    biological research.}
    }
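
    As a rough illustration of how a bilateral pair of looming detectors can be turned into an escape direction, the sketch below assumes two scalar LGMD-style excitation values (left and right) and shows a simplified 'steering wheel' mapping plus a noisy winner-take-all choice. It is not the authors' implementation, and the gain and noise parameters are invented.

        import random

        def steering_wheel(lgmd_left: float, lgmd_right: float, gain: float = 1.0) -> float:
            """Map bilateral LGMD excitations to a turn command.

            Positive return value means turn right (away from a threat seen on the left).
            A simplified, hypothetical reading of a 'steering wheel' scheme: the turn
            is proportional to the left/right imbalance."""
            return gain * (lgmd_left - lgmd_right)

        def randomised_winner_take_all(lgmd_left: float, lgmd_right: float,
                                       noise: float = 0.1) -> str:
            """Pick an escape side: the more excited side usually wins, but additive
            noise occasionally flips the decision, giving a distribution of escape
            directions rather than a deterministic one."""
            left = lgmd_left + random.gauss(0.0, noise)
            right = lgmd_right + random.gauss(0.0, noise)
            return "escape_right" if left > right else "escape_left"

        if __name__ == "__main__":
            # Threat looming on the left: most trials should escape to the right.
            print(steering_wheel(0.8, 0.2))
            print([randomised_winner_take_all(0.8, 0.2) for _ in range(5)])
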

2009

  • K. Appiah, A. Hunter, H. Meng, S. Yue, M. Hobden, N. Priestley, P. Hobden, and C. Pettit, “A binary self-organizing map and its FPGA implementation,” in IEEE International Joint Conference on Neural Networks, 2009.
    [BibTeX] [Abstract] [EPrints]

    A binary Self Organizing Map (SOM) has been designed and implemented on a Field Programmable Gate Array (FPGA) chip. A novel learning algorithm which takes binary inputs and maintains tri-state weights is presented. The binary SOM has the capability of recognizing binary input sequences after training. A novel tri-state rule is used in updating the network weights during the training phase. The rule implementation is highly suited to the FPGA architecture, and allows extremely rapid training. This architecture may be used in real-time for fast pattern clustering and classification of the binary features.

    @inproceedings{lirolem1852,
           booktitle = {IEEE International Joint Conference on Neural Networks},
               month = {June},
               title = {A binary self-organizing map and its FPGA implementation},
              author = {Kofi Appiah and Andrew Hunter and Hongying Meng and Shigang Yue and Mervyn Hobden and Nigel Priestley and Peter Hobden and Cy Pettit},
                year = {2009},
            keywords = {ARRAY(0x7fdc781bd9f8)},
                 url = {http://eprints.lincoln.ac.uk/1852/},
            abstract = {A binary Self Organizing Map (SOM) has been designed and
    implemented on a Field Programmable Gate Array (FPGA) chip. A novel learning algorithm which takes binary inputs and maintains tri-state weights is presented. The binary SOM has the capability of recognizing binary input sequences after training. A novel tri-state rule is used in updating the network weights during the training phase. The rule implementation is highly suited to the FPGA architecture, and allows extremely rapid training. This architecture may be used in real-time for fast pattern clustering and classification of the binary features.}
    }
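
    The tri-state idea described above can be pictured in software. The update rule below (weights in {0, 1, don't-care}; the winning unit commits unknown positions to the input bit and relaxes conflicting ones) is an assumption made purely for illustration, not the exact rule of the paper, which is designed for FPGA logic.

        import numpy as np

        # Tri-state weight encoding (an assumption for illustration):
        #   0 and 1 are committed bits, 2 means "don't care".
        DONT_CARE = 2

        def distance(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
            """Hamming-style distance of a binary input to every unit; 'don't care'
            positions never count as a mismatch."""
            mismatch = (weights != x) & (weights != DONT_CARE)
            return mismatch.sum(axis=1)

        def train(patterns: np.ndarray, n_units: int = 8, epochs: int = 5,
                  seed: int = 0) -> np.ndarray:
            """A minimal tri-state SOM-like learner (not the paper's exact rule):
            the winning unit commits 'don't care' positions to the input bit and
            relaxes disagreeing bits back to 'don't care'."""
            rng = np.random.default_rng(seed)
            weights = rng.integers(0, 3, size=(n_units, patterns.shape[1]))
            for _ in range(epochs):
                for x in patterns:
                    winner = int(np.argmin(distance(weights, x)))
                    w = weights[winner]
                    w[w == DONT_CARE] = x[w == DONT_CARE]        # commit unknowns
                    w[(w != x) & (w != DONT_CARE)] = DONT_CARE   # relax conflicts
            return weights

        if __name__ == "__main__":
            data = np.array([[1, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 1]])
            w = train(data)
            print(distance(w, np.array([1, 1, 0, 0])))
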
  • F. Arvin and S. Doraisamy, “A real-time note transcription technique using static and dynamic window sizes,” in 9 International Conference on Signal Acquisition and Processing, 2009, pp. 30-33.
    [BibTeX] [Abstract] [EPrints]

    This paper presents a real-time signal processing technique using a hardware interface based on the microcontroller to process audio music signals to standard MIDI data. A technique for transcribing music signals by extracting note parameters is described. Two different approaches using static and dynamic window sizes to convert the voice samples for real-time processing without complex calculations are proposed. The transcribed data generated shows the feasibility of using microcontrollers for real-time MIDI generation hardware interface.

    @inproceedings{lirolem11358,
           booktitle = {9 International Conference on Signal Acquisition and Processing},
               month = {April},
               title = {A real-time note transcription technique using static and dynamic window sizes},
              author = {Farshad Arvin and Shyamala Doraisamy},
           publisher = {IEEE},
                year = {2009},
               pages = {30--33},
            keywords = {ARRAY(0x7fdc781b1aa0)},
                 url = {http://eprints.lincoln.ac.uk/11358/},
            abstract = {This paper presents a real-time signal processing technique using a hardware interface based on the microcontroller to process audio music signals to standard MIDI data. A technique for transcribing music signals by extracting note parameters is described. Two different approaches using static and dynamic window sizes to convert the voice samples for real-time processing without complex calculations are proposed. The transcribed data generated shows the feasibility of using microcontrollers for real-time MIDI generation hardware interface.}
    }
  • F. Arvin, K. Samsudin, and A. M. Nasseri, “Design of a differential-drive wheeled robot controller with pulse-width modulation,” in 2009 Conference on Innovative Technologies in Intelligent Systems and Industrial Applications, 2009, pp. 143-147.
    [BibTeX] [Abstract] [EPrints]

    This paper presents a mobile robot motion control technique based on pulse-width modulation (PWM). The technique is employed on AMiR, an autonomous miniature robot for a swarm robotics platform, which uses a differential drive with a caster wheel configuration. The robot’s motors can run at different speeds in either direction, forward and reverse. A microcontroller is deployed as the main processor to generate the motor control pulses and manage the duty cycle of the PWM signals. Two methods of robot trajectory control, rotation and straight movement, are described in this paper. Time estimation and speed selection calculations illustrate the feasibility of this technique for the mobile robot motion control problem.

    @inproceedings{lirolem7327,
           booktitle = {2009 Conference on Innovative Technologies in Intelligent Systems and Industrial Applications},
               month = {July},
               title = {Design of a differential-drive wheeled robot controller with pulse-width modulation},
              author = {Farshad Arvin and Khairulmizam Samsudin and M. Ali Nasseri},
           publisher = {IEEE},
                year = {2009},
               pages = {143--147},
            keywords = {ARRAY(0x7fdc7818d3d0)},
                 url = {http://eprints.lincoln.ac.uk/7327/},
             abstract = {This paper presents a mobile robot motion control technique based on pulse-width modulation (PWM). The technique is employed on AMiR, an autonomous miniature robot for a swarm robotics platform, which uses a differential drive with a caster wheel configuration. The robot's motors can run at different speeds in either direction, forward and reverse. A microcontroller is deployed as the main processor to generate the motor control pulses and manage the duty cycle of the PWM signals. Two methods of robot trajectory control, rotation and straight movement, are described in this paper. Time estimation and speed selection calculations illustrate the feasibility of this technique for the mobile robot motion control problem.}
    }
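
    A minimal sketch of the two trajectory-control modes described above, straight movement and in-place rotation, expressed as left/right duty cycles together with a rotation-time estimate. The wheel-base and speed constants are illustrative, not AMiR's actual parameters.

        from dataclasses import dataclass
        import math

        @dataclass
        class DiffDrive:
            """Toy differential-drive helper; geometry and speed constants are
            illustrative placeholders, not AMiR's real values."""
            wheel_base_m: float = 0.06      # distance between the two wheels
            max_speed_mps: float = 0.10     # wheel speed at 100% duty cycle

            def straight(self, speed_mps: float) -> tuple[float, float]:
                """Equal duty cycles on both wheels drive the robot straight."""
                duty = max(-1.0, min(1.0, speed_mps / self.max_speed_mps))
                return duty, duty

            def rotate_in_place(self, speed_mps: float) -> tuple[float, float]:
                """Opposite duty cycles spin the robot about its centre."""
                duty = max(-1.0, min(1.0, speed_mps / self.max_speed_mps))
                return duty, -duty

            def rotation_time_s(self, angle_rad: float, speed_mps: float) -> float:
                """Time estimate for an in-place rotation: each wheel travels along a
                circle of radius wheel_base/2, so arc length = angle * base / 2."""
                return abs(angle_rad) * (self.wheel_base_m / 2.0) / abs(speed_mps)

        if __name__ == "__main__":
            robot = DiffDrive()
            print(robot.straight(0.05))                       # ~50% duty on both wheels
            print(robot.rotate_in_place(0.05))                # spin in place
            print(robot.rotation_time_s(math.pi / 2, 0.05))   # quarter-turn duration
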
  • F. Arvin, K. Samsudin, and A. R. Ramli, “A short-range infrared communication for swarm mobile robots,” in International Conference on Signal Processing Systems, 2009, pp. 454-458.
    [BibTeX] [Abstract] [EPrints]

    This paper presents another short-range communication technique suitable for swarm mobile robot applications. Infrared is used for transmitting and receiving data packets and for obstacle detection. The infrared communication system is used for an autonomous mobile robot (UPM-AMR) that will be used as a low-cost platform for robotics research. A pulse-code modulation (PCM) digital scheme is used for transmitting data. The reflected infrared signal is also used for distance estimation for obstacle avoidance. Analysis of the robot’s behaviors shows the feasibility of using infrared signals to obtain reliable local communication between swarm mobile robots.

    @inproceedings{lirolem7326,
           booktitle = {International Conference on Signal Processing Systems},
               month = {May},
               title = {A short-range infrared communication for swarm mobile robots},
              author = {Farshad Arvin and Khairulmizam Samsudin and Abdul Rahman Ramli},
           publisher = {IEEE},
                year = {2009},
               pages = {454--458},
            keywords = {ARRAY(0x7fdc781c6ba8)},
                 url = {http://eprints.lincoln.ac.uk/7326/},
             abstract = {This paper presents another short-range communication technique suitable for swarm mobile robot applications. Infrared is used for transmitting and receiving data packets and for obstacle detection. The infrared communication system is used for an autonomous mobile robot (UPM-AMR) that will be used as a low-cost platform for robotics research. A pulse-code modulation (PCM) digital scheme is used for transmitting data. The reflected infrared signal is also used for distance estimation for obstacle avoidance. Analysis of the robot's behaviors shows the feasibility of using infrared signals to obtain reliable local communication between swarm mobile robots.}
    }
  • F. Arvin, K. Samsudin, and A. R. Ramli, “Swarm robots long term autonomy using moveable charger ,” in International Conference on Future Computer and Communication, 2009, pp. 127-130.
    [BibTeX] [Abstract] [EPrints]

    This paper proposes an alternative docking charger station for mobile robots. One crucial task in a swarm robotics scenario is to find low-battery robots and recover them. Overcoming this problem requires a versatile mobile charging station that is able to actively locate and charge inactive mobile robots, unlike conventional stationary docking stations. The mobile charger robot should perform independent tasks other than the swarm robots’ global task. It uses a similar communication method to the other swarm robots; however, new preprogrammed instructions are required to facilitate the battery charging task for the swarm robots.

    @inproceedings{lirolem11357,
           booktitle = {International Conference on Future Computer and Communication},
               month = {April},
               title = {Swarm robots long term autonomy using moveable charger },
              author = {Farshad Arvin and Khairulmizam Samsudin and Abdul Rahman Ramli},
           publisher = {IEEE},
                year = {2009},
               pages = {127--130},
            keywords = {ARRAY(0x7fdc78121d40)},
                 url = {http://eprints.lincoln.ac.uk/11357/},
             abstract = {This paper proposes an alternative docking charger station for mobile robots. One crucial task in a swarm robotics scenario is to find low-battery robots and recover them. Overcoming this problem requires a versatile mobile charging station that is able to actively locate and charge inactive mobile robots, unlike conventional stationary docking stations. The mobile charger robot should perform independent tasks other than the swarm robots' global task. It uses a similar communication method to the other swarm robots; however, new preprogrammed instructions are required to facilitate the battery charging task for the swarm robots.}
    }
  • F. Arvin, K. Samsudin, and A. R. Ramli, “Development of a miniature robot for swarm robotic application,” International Journal of Computer and Electrical Engineering, vol. 1, iss. 4, pp. 436-442, 2009.
    [BibTeX] [Abstract] [EPrints]

    Biological swarming is a fascinating natural behavior that has been successfully applied to solve human problems, especially in robotics applications. The high economic cost and large area required to execute swarm robotics scenarios do not permit experimentation with real robots. Modelling and simulating the large numbers of robots involved is extremely complex and often inaccurate. This paper describes the design decisions and presents the development of an autonomous miniature mobile robot (AMiR) for swarm robotics research and education. The large number of robots in these systems means that an individual AMiR unit can be designed with simple perception and mobility abilities, so that many robots can be replicated easily and economically. AMiR has been designed as a complete platform with supporting software development tools for robotics education and research in the Department of Computer and Communication Systems Engineering, UPM. The experimental results demonstrate the feasibility of using this robot to implement swarm robotic applications.

    @article{lirolem5797,
              volume = {1},
              number = {4},
               month = {August},
              author = {Farshad Arvin and Khairulmizam Samsudin and Abdul Rahman Ramli},
               title = {Development of a miniature robot for swarm robotic application},
           publisher = {International Association of Computer Science and Information Technology Press},
                year = {2009},
             journal = {International Journal of Computer and Electrical Engineering},
               pages = {436--442},
            keywords = {ARRAY(0x7fdc78176b00)},
                 url = {http://eprints.lincoln.ac.uk/5797/},
             abstract = {Biological swarming is a fascinating natural behavior that has been successfully applied to solve human problems, especially in robotics applications. The high economic cost and large area required to execute swarm robotics scenarios do not permit experimentation with real robots. Modelling and simulating the large numbers of robots involved is extremely complex and often inaccurate. This paper describes the design decisions and presents the development of an autonomous miniature mobile robot (AMiR) for swarm robotics research and education. The large number of robots in these systems means that an individual AMiR unit can be designed with simple perception and mobility abilities, so that many robots can be replicated easily and economically. AMiR has been designed as a complete platform with supporting software development tools for robotics education and research in the Department of Computer and Communication Systems Engineering, UPM. The experimental results demonstrate the feasibility of using this robot to implement swarm robotic applications.}
    }
  • M. Barnes, T. Duckett, and G. Cielniak, “Boosting minimalist classifiers for blemish detection in potatoes,” in Image and Vision Computing New Zealand, 2009, pp. 397-402.
    [BibTeX] [Abstract] [EPrints]

    This paper introduces novel methods for detecting blemishes in potatoes using machine vision. After segmentation of the potato from the background, a pixel-wise classifier is trained to detect blemishes using features extracted from the image. A very large set of candidate features, based on statistical information relating to the colour and texture of the region surrounding a given pixel, is first extracted. Then an adaptive boosting algorithm (AdaBoost) is used to automatically select the best features for discriminating between blemishes and nonblemishes. With this approach, different features can be selected for different potato varieties, while also handling the natural variation in fresh produce due to different seasons, lighting conditions, etc. The results show that the method is able to build ‘minimalist’ classifiers that optimise detection performance at low computational cost. In experiments, minimalist blemish detectors were trained for both white and red potato varieties, achieving 89.6% and 89.5% accuracy respectively.

    @inproceedings{lirolem2134,
           booktitle = {Image and Vision Computing New Zealand},
               month = {November},
               title = {Boosting minimalist classifiers for blemish detection in potatoes},
              author = {Michael Barnes and Tom Duckett and Grzegorz Cielniak},
                year = {2009},
               pages = {397--402},
                note = {This paper introduces novel methods for detecting blemishes in potatoes using machine vision. After segmentation of the potato from the background, a pixel-wise classifier is trained to detect blemishes using features extracted from the image. A very large set of candidate features, based on statistical information relating to the colour and texture of the region surrounding a given pixel, is first extracted. Then an adaptive boosting algorithm (AdaBoost) is used to automatically select the best features for discriminating between blemishes and nonblemishes.
    With this approach, different features can be selected
    for different potato varieties, while also handling the natural variation in fresh produce due to different seasons, lighting conditions, etc. The results show that the method is able to build 'minimalist' classifiers that optimise detection performance at low computational cost. In experiments, minimalist blemish detectors were trained for both white and red potato varieties, achieving 89.6\% and 89.5\% accuracy respectively.},
            keywords = {ARRAY(0x7fdc781b42d8)},
                 url = {http://eprints.lincoln.ac.uk/2134/},
            abstract = {This paper introduces novel methods for detecting blemishes in potatoes using machine vision. After segmentation of the potato from the background, a pixel-wise classifier is trained to detect blemishes using features extracted from the image. A very large set of candidate features, based on statistical information relating to the colour and texture of the region surrounding a given pixel, is first extracted. Then an adaptive boosting algorithm (AdaBoost) is used to automatically select the best features for discriminating between blemishes and nonblemishes.
    With this approach, different features can be selected
    for different potato varieties, while also handling the natural variation in fresh produce due to different seasons, lighting conditions, etc. The results show that the method is able to build 'minimalist' classifiers that optimise detection performance at low computational cost. In experiments, minimalist blemish detectors were trained for both white and red potato varieties, achieving 89.6\% and 89.5\% accuracy respectively.}
    }
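
    The pixel-wise boosting idea can be sketched with an off-the-shelf AdaBoost classifier. The feature vectors below are random placeholders standing in for the colour/texture statistics computed around each pixel; the point is only that boosting over weak learners performs the implicit feature selection the abstract refers to.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier

        # Hypothetical stand-in for the paper's per-pixel statistics: each "pixel"
        # is summarised by a few colour/texture numbers computed over a neighbourhood.
        rng = np.random.default_rng(0)
        n_pixels = 2000
        features = rng.normal(size=(n_pixels, 6))          # placeholder feature vectors
        labels = (features[:, 0] + 0.5 * features[:, 3] > 0).astype(int)  # 1 = blemish

        # Boosting over weak learners (decision stumps by default) does the feature
        # selection implicitly: informative columns dominate the ensemble.
        clf = AdaBoostClassifier(n_estimators=50, random_state=0)
        clf.fit(features[:1500], labels[:1500])
        print("held-out accuracy:", clf.score(features[1500:], labels[1500:]))
        print("feature importances:", clf.feature_importances_.round(2))
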
  • N. Bellotto, E. Sommerlade, B. Benfold, C. Bibby, I. Reid, D. Roth, C. Fenandez, L. V. Gool, and J. Gonzales, “A distributed camera system for multi-resolution surveillance,” in 3rd ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC), 2009.
    [BibTeX] [Abstract] [EPrints]

    We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor. Asynchronous interprocess communications and archiving of data are achieved in a simple and effective way via a central repository, implemented using an SQL database. Visual tracking data from static views are stored dynamically into tables in the database via client calls to the SQL server. A supervisor process running on the SQL server determines if active zoom cameras should be dispatched to observe a particular target, and this message is effected via writing demands into another database table. We show results from a real implementation of the system comprising one static camera overviewing the environment under consideration and a PTZ camera operating under closed-loop velocity control, which uses a fast and robust level-set-based region tracker. Experiments demonstrate the effectiveness of our approach and its feasibility to multi-camera systems for intelligent surveillance.

    @inproceedings{lirolem2097,
           booktitle = {3rd ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)},
               title = {A distributed camera system for multi-resolution surveillance},
              author = {Nicola Bellotto and Eric Sommerlade and Ben Benfold and Charles Bibby and Ian Reid and Daniel Roth and Carles Fenandez and Luc Van Gool and Jordi Gonzales},
                year = {2009},
                note = {We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor.
    Asynchronous interprocess communications and archiving of data are achieved in a simple and effective way via a central repository, implemented using an SQL database.
    Visual tracking data from static views are stored dynamically into tables in the database via client calls to the SQL server. A supervisor process running on the SQL server determines if active zoom cameras should be dispatched to observe a particular target, and this message is effected via writing demands into another database table.
    We show results from a real implementation of the system comprising one static camera overviewing the environment under consideration and a PTZ camera operating
    under closed-loop velocity control, which uses a fast and robust level-set-based region tracker. Experiments demonstrate the effectiveness of our approach and its feasibility to multi-camera systems for intelligent surveillance.},
            keywords = {ARRAY(0x7fdc7802b940)},
                 url = {http://eprints.lincoln.ac.uk/2097/},
            abstract = {We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor.
    Asynchronous interprocess communications and archiving of data are achieved in a simple and effective way via a central repository, implemented using an SQL database.
    Visual tracking data from static views are stored dynamically into tables in the database via client calls to the SQL server. A supervisor process running on the SQL server determines if active zoom cameras should be dispatched to observe a particular target, and this message is effected via writing demands into another database table.
    We show results from a real implementation of the system comprising one static camera overviewing the environment under consideration and a PTZ camera operating
    under closed-loop velocity control, which uses a fast and robust level-set-based region tracker. Experiments demonstrate the effectiveness of our approach and its feasibility to multi-camera systems for intelligent surveillance.}
    }
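
    A toy version of the central-repository idea, assuming an SQLite database in place of the SQL server used in the paper: tracker processes insert observations into one table, and a supervisor reads them and writes PTZ demands into another. Table and column names are invented for illustration, not the paper's actual schema.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE observations (camera TEXT, target INTEGER, x REAL, y REAL);
            CREATE TABLE ptz_demands  (camera TEXT, target INTEGER);
        """)

        def report_observation(camera: str, target: int, x: float, y: float) -> None:
            """Called by a static-camera tracking process."""
            db.execute("INSERT INTO observations VALUES (?, ?, ?, ?)",
                       (camera, target, x, y))
            db.commit()

        def supervise() -> None:
            """Dispatch the PTZ camera to the most recently observed target."""
            row = db.execute(
                "SELECT target FROM observations ORDER BY rowid DESC LIMIT 1"
            ).fetchone()
            if row is not None:
                db.execute("INSERT INTO ptz_demands VALUES (?, ?)", ("ptz0", row[0]))
                db.commit()

        report_observation("static0", target=7, x=1.2, y=3.4)
        supervise()
        print(db.execute("SELECT * FROM ptz_demands").fetchall())
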
  • N. Bellotto and H. Hu, “Multisensor-based human detection and tracking for mobile service robots,” IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 39, iss. 1, pp. 167-181, 2009.
    [BibTeX] [Abstract] [EPrints]

    One of the fundamental issues for service robots is human-robot interaction. In order to perform such a task and provide the desired services, these robots need to detect and track people in their surroundings. In the present paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the on-board LRF. The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to be very discriminative also in cluttered environments. These patterns can be used to localize both static and walking persons, even when the robot moves. Furthermore, faces are detected using the robot’s camera and the information is fused with the legs position using a sequential implementation of the Unscented Kalman Filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.

    @article{lirolem2096,
              volume = {39},
              number = {1},
               month = {February},
              author = {Nicola Bellotto and Huosheng Hu},
               title = {Multisensor-based human detection and tracking for mobile service robots},
           publisher = {IEEE Systems, Man and Cybernetics Society},
                year = {2009},
             journal = {IEEE Transactions on Systems, Man and Cybernetics, Part B},
               pages = {167--181},
            keywords = {ARRAY(0x7fdc78062970)},
                 url = {http://eprints.lincoln.ac.uk/2096/},
             abstract = {One of the fundamental issues for service robots is human-robot interaction. In order to perform such a task and provide the desired services, these robots need to detect and track people in their surroundings. In the present paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the on-board LRF. The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to be very discriminative also in cluttered environments. These patterns can be used to localize both static and walking persons, even when the robot moves. Furthermore, faces are detected using the robot's camera and the information is fused with the legs position using a sequential implementation of the Unscented Kalman Filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms.
    Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.}
    }
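
    The sequential fusion idea can be sketched with a plain (linear) Kalman update applied twice in a row, once for the laser-based leg measurement and once for the camera-based face measurement. This is a simplification of the Unscented Kalman Filter used in the paper, and all measurement values and noise levels are invented.

        import numpy as np

        def kalman_update(x, P, z, R):
            """One measurement update of a position-only state (H = I). Applied
            twice in sequence it mimics the sequential fusion idea: laser-based
            leg position first, then the camera-based face position."""
            H = np.eye(2)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(2) - K @ H) @ P
            return x, P

        # Simplified linear stand-in for the unscented filter in the paper.
        x = np.array([0.0, 0.0])              # person position estimate (x, y)
        P = np.eye(2) * 1.0                   # its covariance

        legs = np.array([1.0, 2.1])           # laser leg detection (hypothetical)
        face = np.array([1.2, 1.9])           # camera face detection (hypothetical)

        x, P = kalman_update(x, P, legs, R=np.eye(2) * 0.05)
        x, P = kalman_update(x, P, face, R=np.eye(2) * 0.20)
        print("fused position:", x.round(2))
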
  • P. Biber and T. Duckett, “Experimental analysis of sample-based maps for long-term SLAM,” International Journal of Robotics Research, vol. 28, iss. 1, pp. 20-33, 2009.
    [BibTeX] [Abstract] [EPrints]

    This paper presents a system for long-term SLAM (simultaneous localization and mapping) by mobile service robots and its experimental evaluation in a real dynamic environment. To deal with the stability-plasticity dilemma (the trade-off between adaptation to new patterns and preservation of old patterns), the environment is represented at multiple timescales simultaneously (5 in our experiments). A sample-based representation is proposed, where older memories fade at different rates depending on the timescale, and robust statistics are used to interpret the samples. The dynamics of this representation are analysed in a five week experiment, measuring the relative influence of short- and long-term memories over time, and further demonstrating the robustness of the approach.

    @article{lirolem2095,
              volume = {28},
              number = {1},
               month = {August},
              author = {Peter Biber and Tom Duckett},
               title = {Experimental analysis of sample-based maps for long-term SLAM},
           publisher = {SAGE},
                year = {2009},
             journal = {International Journal of Robotics Research},
               pages = {20--33},
            keywords = {ARRAY(0x7fdc7812ad80)},
                 url = {http://eprints.lincoln.ac.uk/2095/},
            abstract = {This paper presents a system for long-term SLAM (simultaneous localization and mapping) by mobile service robots and its experimental evaluation in a real dynamic environment. To deal with the stability-plasticity dilemma (the trade-off between adaptation to new patterns and preservation of old patterns), the environment is represented at multiple timescales simultaneously (5 in our experiments). A sample-based representation is
    proposed, where older memories fade at different rates depending on the timescale, and robust statistics are used to interpret the samples. The dynamics of this representation are analysed in a five week experiment, measuring the relative influence of short- and long-term memories over time, and further demonstrating the robustness of the approach.}
    }
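
    The multiple-timescale, fading-sample idea can be pictured for a single map cell: each timescale keeps a bounded buffer of samples (a small buffer acts as short-term memory, a large one as long-term memory) and a robust statistic interprets them. This is a toy reading of the representation, not the paper's exact sample-weighting scheme; buffer sizes and values are invented.

        import random
        import statistics

        class MultiTimescaleCell:
            """One map cell remembered at several timescales: each timescale keeps
            a bounded sample buffer, and shorter timescales forget old observations
            faster."""

            def __init__(self, buffer_sizes=(4, 16, 64)):
                # Small buffer = short-term memory, large buffer = long-term memory.
                self.buffers = [[] for _ in buffer_sizes]
                self.sizes = buffer_sizes

            def observe(self, value: float) -> None:
                for buf, size in zip(self.buffers, self.sizes):
                    buf.append(value)
                    if len(buf) > size:
                        buf.pop(0)            # the oldest sample fades out first

            def estimates(self):
                """Robust (median) estimate per timescale."""
                return [statistics.median(buf) for buf in self.buffers if buf]

        if __name__ == "__main__":
            cell = MultiTimescaleCell()
            # A door that was mostly "open" (1.0) but recently became "closed" (0.0).
            for v in [1.0] * 60 + [0.0] * 6:
                cell.observe(v + random.gauss(0, 0.01))
            print(cell.estimates())   # short-term memory reflects the change first
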
  • H. Cuayáhuitl, “Hierarchical reinforcement learning for spoken dialogue systems,” PhD Thesis, 2009.
    [BibTeX] [Abstract] [EPrints]

    This thesis focuses on the problem of scalable optimization of dialogue behaviour in speech-based conversational systems using reinforcement learning. Most previous investigations in dialogue strategy learning have proposed flat reinforcement learning methods, which are more suitable for small-scale spoken dialogue systems. This research formulates the problem in terms of Semi-Markov Decision Processes (SMDPs), and proposes two hierarchical reinforcement learning methods to optimize sub-dialogues rather than full dialogues. The first method uses a hierarchy of SMDPs, where every SMDP ignores irrelevant state variables and actions in order to optimize a sub-dialogue. The second method extends the first one by constraining every SMDP in the hierarchy with prior expert knowledge. The latter method proposes a learning algorithm called ‘HAM+HSMQ-Learning’, which combines two existing algorithms in the literature of hierarchical reinforcement learning. Whilst the first method generates fully-learnt behaviour, the second one generates semi-learnt behaviour. In addition, this research proposes a heuristic dialogue simulation environment for automatic dialogue strategy learning. Experiments were performed on simulated and real environments based on a travel planning spoken dialogue system. Experimental results provided evidence to support the following claims: First, both methods scale well at the cost of near-optimal solutions, resulting in slightly longer dialogues than the optimal solutions. Second, dialogue strategies learnt with coherent user behaviour and conservative recognition error rates can outperform a reasonable hand-coded strategy. Third, semi-learnt dialogue behaviours are a better alternative (because of their higher overall performance) than hand-coded or fully-learnt dialogue behaviours. Last, hierarchical reinforcement learning dialogue agents are feasible and promising for the (semi) automatic design of adaptive behaviours in larger-scale spoken dialogue systems. This research makes the following contributions to spoken dialogue systems which learn their dialogue behaviour. First, the Semi-Markov Decision Process (SMDP) model was proposed to learn spoken dialogue strategies in a scalable way. Second, the concept of ‘partially specified dialogue strategies’ was proposed for integrating simultaneously hand-coded and learnt spoken dialogue behaviours into a single learning framework. Third, an evaluation with real users of hierarchical reinforcement learning dialogue agents was essential to validate their effectiveness in a realistic environment.

    @phdthesis{lirolem22207,
               month = {March},
               title = {Hierarchical reinforcement learning for spoken dialogue systems},
              school = {The University of Edinburgh},
              author = {Heriberto Cuay{\'a}huitl},
                year = {2009},
            keywords = {ARRAY(0x7fdc7818a1f0)},
                 url = {http://eprints.lincoln.ac.uk/22207/},
            abstract = {This thesis focuses on the problem of scalable optimization of dialogue behaviour in speech-based conversational systems using reinforcement learning. Most previous investigations in dialogue strategy learning have proposed flat reinforcement learning methods, which are more suitable for small-scale spoken dialogue systems. This research formulates the problem in terms of Semi-Markov Decision Processes (SMDPs), and proposes two hierarchical reinforcement learning methods to optimize sub-dialogues rather than full dialogues. The first method uses a hierarchy of SMDPs, where every SMDP ignores irrelevant state variables and actions in order to optimize a sub-dialogue. The second method extends the first one by constraining every SMDP in the hierarchy with prior expert knowledge. The latter method proposes a learning algorithm called 'HAM+HSMQ-Learning', which combines two existing algorithms in the literature of hierarchical reinforcement learning. Whilst the first method generates fully-learnt behaviour, the second one generates semi-learnt behaviour. In addition, this research proposes a heuristic dialogue simulation environment for automatic dialogue strategy learning. Experiments were performed on simulated and real environments based on a travel planning spoken dialogue system. Experimental results provided evidence to support the following claims: First, both methods scale well at the cost of near-optimal solutions, resulting in slightly longer dialogues than the optimal solutions. Second, dialogue strategies learnt with coherent user behaviour and conservative recognition error rates can outperform a reasonable hand-coded strategy. Third, semi-learnt dialogue behaviours are a better alternative (because of their higher overall performance) than hand-coded or fully-learnt dialogue behaviours. Last, hierarchical reinforcement learning dialogue agents are feasible and promising for the (semi) automatic design of adaptive behaviours in larger-scale spoken dialogue systems. This research makes the following contributions to spoken dialogue systems which learn their dialogue behaviour. First, the Semi-Markov Decision Process (SMDP) model was proposed to learn spoken dialogue strategies in a scalable way. Second, the concept of 'partially specified dialogue strategies' was proposed for integrating simultaneously hand-coded and learnt spoken dialogue behaviours into a single learning framework. Third, an evaluation with real users of hierarchical reinforcement learning dialogue agents was essential to validate their effectiveness in a realistic environment.}
    }
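
    For readers unfamiliar with Semi-Markov Decision Processes, the sketch below shows the generic tabular SMDP Q-learning update, in which an option (e.g. a sub-dialogue) lasting tau primitive steps discounts the future return by gamma**tau. It is the textbook rule, not the thesis's HAM+HSMQ-Learning algorithm, and the states, actions, and reward are invented.

        def smdp_q_update(Q, state, action, reward, next_state, tau,
                          actions=("ask", "confirm", "close"),
                          alpha=0.1, gamma=0.95):
            """One tabular Q-learning update for a Semi-Markov Decision Process:
            the option lasted `tau` primitive steps, so the future return is
            discounted by gamma**tau."""
            best_next = max(Q.get((next_state, a), 0.0) for a in actions)
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward + gamma ** tau * best_next - old)
            return Q

        if __name__ == "__main__":
            Q = {}
            # A sub-dialogue "ask" that took 3 turns and earned reward -3 (one per turn).
            Q = smdp_q_update(Q, "need_dates", "ask", -3.0, "have_dates", tau=3)
            print(Q)
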
  • F. Dayoub, T. Duckett, and G. Cielniak, “An adaptive spherical view representation for navigation in changing environments,” in 4th European Conference on Mobile Robots ECMR-09, 2009.
    [BibTeX] [Abstract] [EPrints]

    Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In previous work we introduced a method to update the reference views in a topological map so that a mobile robot could continue to localize itself in a changing environment using omni-directional vision. In this work we extend this long-term updating mechanism to incorporate a spherical metric representation of the observed visual features for each node in the topological map. Using multi-view geometry we are then able to estimate the heading of the robot, in order to enable navigation between the nodes of the map, and to simultaneously adapt the spherical view representation in response to environmental changes. The results demonstrate the persistent performance of the proposed system in a long-term experiment.

    @inproceedings{lirolem1960,
           booktitle = {4th European Conference on Mobile Robots ECMR-09},
               month = {September},
               title = {An adaptive spherical view representation for navigation in changing environments},
              author = {Feras Dayoub and Tom Duckett and Grzegorz Cielniak},
                year = {2009},
                note = {Real-world environments such as houses and offices change over time, meaning that a mobile robot's map will become out of date. In previous work we introduced a method to update the reference views in a topological map so that a mobile robot could continue to localize itself in a changing environment using omni-directional vision. In this work we extend this long-term updating mechanism to incorporate a spherical metric representation of the observed visual features for each node in the topological map. Using multi-view geometry we are then able to estimate the heading of the robot, in order to enable navigation between the nodes of the map, and to simultaneously adapt the spherical view representation in response to environmental changes. The results demonstrate the persistent performance of the proposed system in a long-term experiment.},
             journal = {European Conference on Mobile Robots - ECMR 2009},
            keywords = {ARRAY(0x7fdc781a1878)},
                 url = {http://eprints.lincoln.ac.uk/1960/},
             abstract = {Real-world environments such as houses and offices change over time, meaning that a mobile robot's map will become out of date. In previous work we introduced a method to update the reference views in a topological map so that a mobile robot could continue to localize itself in a changing environment using omni-directional vision. In this work we extend this long-term updating mechanism to incorporate a spherical metric representation of the observed visual features for each node in the topological map. Using multi-view geometry we are then able to estimate the heading of the robot, in order to enable navigation between the nodes of the map, and to simultaneously adapt the spherical view representation in response to environmental changes. The results demonstrate the persistent performance of the proposed system in a long-term experiment.}
    }
  • A. Dierker, C. Mertes, T. Hermann, M. Hanheide, and G. Sagerer, “Mediated attention with multimodal augmented reality,” in International Conference on Multimodal interfaces – ICMI-MLMI ’09, 2009.
    [BibTeX] [Abstract] [EPrints]

    We present an Augmented Reality (AR) system to support collaborative tasks in a shared real-world interaction space by facilitating joint attention. The users are assisted by information about their interaction partner’s field of view both visually and acoustically. In our study, the audiovisual improvements are compared with an AR system without these support mechanisms in terms of the participants’ reaction times and error rates. The participants performed a simple object-choice task we call the gaze game to ensure controlled experimental conditions. Additionally, we asked the subjects to fill in a questionnaire to gain subjective feedback from them. We were able to show an improvement for both dependent variables as well as positive feedback for the visual augmentation in the questionnaire.

    @inproceedings{lirolem6922,
           booktitle = {International Conference on Multimodal interfaces - ICMI-MLMI '09},
              editor = {B. Gottfried and H. Aghajan},
               month = {November},
               title = {Mediated attention with multimodal augmented reality},
              author = {Angelika Dierker and Christian Mertes and Thomas Hermann and Marc Hanheide and Gerhard Sagerer},
                year = {2009},
            keywords = {ARRAY(0x7fdc7808b318)},
                 url = {http://eprints.lincoln.ac.uk/6922/},
            abstract = {We present an Augmented Reality (AR) system to support
    collaborative tasks in a shared real-world interaction space
    by facilitating joint attention. The users are assisted by information about their interaction partner's field of view both visually and acoustically. In our study, the audiovisual improvements are compared with an AR system without these support mechanisms in terms of the participants' reaction times and error rates. The participants performed a simple object-choice task we call the gaze game to ensure controlled experimental conditions. Additionally, we asked the subjects to fill in a questionnaire to gain subjective feedback from them. We were able to show an improvement for both dependent variables as well as positive feedback for the visual augmentation in the questionnaire.}
    }
  • C. Lang, M. Hanheide, M. Lohse, H. Wersing, and G. Sagerer, “Feedback interpretation based on facial expressions in human-robot interaction,” in The 18th IEEE International Symposium on Robot and Human Interactive Communication, 2009, pp. 189-194.
    [BibTeX] [Abstract] [EPrints]

    In everyday conversation besides speech people also communicate by means of nonverbal cues. Facial expressions are one important cue, as they can provide useful information about the conversation, for instance, whether the interlocutor seems to understand or appears to be puzzled. Similarly, in human-robot interaction facial expressions also give feedback about the interaction situation. We present a Wizard of Oz user study in an object-teaching scenario where subjects showed several objects to a robot and taught the objects’ names. Afterward, the robot should term the objects correctly. In a first evaluation, we let other people watch short video sequences of this study. They decided by looking at the face of the human whether the answer of the robot was correct (unproblematic situation) or incorrect (problematic situation). We conducted the experiments under specific conditions by varying the amount of temporal and visual context information and compare the results with related experiments described in the literature.

    @inproceedings{lirolem6919,
               month = {September},
              author = {Christian Lang and Marc Hanheide and Manja Lohse and Heiko Wersing and Gerhard Sagerer},
                note = {In everyday conversation besides speech people also communicate by means of nonverbal cues. Facial expressions are one important cue, as they can provide useful information about the conversation, for instance, whether the interlocutor seems to understand or appears to be puzzled. Similarly, in human-robot interaction facial expressions also give feedback about the interaction situation. We present a Wizard of Oz user study in an object-teaching scenario where subjects showed several objects to a robot and taught the objects' names. Afterward, the robot should term the objects correctly. In a first evaluation, we let other people watch short video sequences of this study. They decided by looking at the face of the human whether the answer of the robot was correct (unproblematic situation) or incorrect (problematic situation). We conducted the experiments under specific conditions by varying the amount of temporal and visual context information and compare the results with related experiments described in the literature.},
           booktitle = {The 18th IEEE International Symposium on Robot and Human Interactive Communication},
              editor = {B. Gottfried and H. Aghajan},
               title = {Feedback interpretation based on facial expressions in human-robot interaction},
           publisher = {IEEE},
                year = {2009},
               pages = {189--194},
            keywords = {ARRAY(0x7fdc7811e038)},
                 url = {http://eprints.lincoln.ac.uk/6919/},
            abstract = {In everyday conversation besides speech people also communicate by means of nonverbal cues. Facial expressions are one important cue, as they can provide useful information about the conversation, for instance, whether the interlocutor seems to understand or appears to be puzzled. Similarly, in human-robot interaction facial expressions also give feedback about the interaction situation. We present a Wizard of Oz user study in an object-teaching scenario where subjects showed several objects to a robot and taught the objects' names. Afterward, the robot should term the objects correctly. In a first evaluation, we let other people watch short video sequences of this study. They decided by looking at the face of the human whether the answer of the robot was correct (unproblematic situation) or incorrect (problematic situation). We conducted the experiments under specific conditions by varying the amount of temporal and visual context information and compare the results with related experiments described in the literature.}
    }
  • M. Lohse, M. Hanheide, K. Pitsch, K. J. Rohlfing, and G. Sagerer, “Improving HRI design by applying systemic interaction analysis (SInA),” Interaction Studies, vol. 10, iss. 3, pp. 298-303, 2009.
    [BibTeX] [Abstract] [EPrints]

    Social robots are designed to interact with humans. That is why they need interaction models that take social behaviors into account. These usually influence many of a robot’s abilities simultaneously. Hence, when designing robots that users will want to interact with, all components need to be tested in the system context, with real users and real tasks in real interactions. This requires methods that link the analysis of the robot’s internal computations within and between components (system level) with the interplay between robot and user (interaction level). This article presents Systemic Interaction Analysis (SInA) as an integrated method to (a) derive prototypical courses of interaction based on system and interaction level, (b) identify deviations from these, (c) infer the causes of deviations by analyzing the system’s operational sequences, and (d) improve the robot iteratively by adjusting models and implementations.

    @article{lirolem6704,
              volume = {10},
              number = {3},
               month = {October},
              author = {Manja Lohse and Marc Hanheide and Karola Pitsch and Katharina J. Rohlfing and Gerhard Sagerer},
               title = {Improving HRI design by applying systemic interaction analysis (SInA)},
           publisher = {John Benjamins Publishing },
                year = {2009},
             journal = {Interaction Studies},
               pages = {298--303},
            keywords = {ARRAY(0x7fdc7808f0b0)},
                 url = {http://eprints.lincoln.ac.uk/6704/},
            abstract = {Social robots are designed to interact with humans. That is why they need interaction models that take social behaviors into account. These usually influence many of a robot's abilities simultaneously. Hence, when designing robots that users will want to interact with, all components need to be tested in the system context, with real users and real tasks in real interactions. This requires methods that link the analysis of the robot's internal computations within and between components (system level) with the interplay between robot and user (interaction level). This article presents Systemic Interaction Analysis (SInA) as an integrated method to (a) derive prototypical courses of interaction based on system and interaction level, (b) identify deviations from these, (c) infer the causes of deviations by analyzing the system's operational sequences, and (d) improve the robot iteratively by adjusting models and implementations.}
    }
  • M. Lohse, M. Hanheide, K. J. Rohlfing, and G. Sagerer, “Systemic interaction analysis (SInA) in HRI,” in 4th ACM/IEEE international conference on Human robot interaction – HRI ’09, 2009, pp. 93-100.
    [BibTeX] [Abstract] [EPrints]

    Recent developments in robotics enable advanced human-robot interaction. Especially interactions of novice users with robots are often unpredictable and, therefore, demand for novel methods for the analysis of the interaction in systemic ways. We propose Systemic Interaction Analysis (SInA) as a method to jointly analyze system level and interaction level in an integrated manner using one tool. The approach allows us to trace back patterns that deviate from prototypical interaction sequences to the distinct system components of our autonomous robot. In this paper, we exemplarily apply the method to the analysis of the follow behavior of our domestic robot BIRON. The analysis is the basis to achieve our goal of improving human-robot interaction iteratively.

    @inproceedings{lirolem6926,
               month = {March},
              author = {Manja Lohse and Marc Hanheide and Katharina J. Rohlfing and Gerhard Sagerer},
                note = {Recent developments in robotics enable advanced human-robot interaction. Especially interactions of novice users with robots are often unpredictable and, therefore, demand for novel methods for the analysis of the interaction in systemic ways. We propose Systemic Interaction Analysis (SInA) as a method to jointly analyze system level and interaction level in an integrated manner using one tool. The approach allows us to trace back patterns that deviate from prototypical interaction sequences to the distinct system components of our autonomous robot. In this paper, we exemplarily apply the method to the analysis of the follow behavior of our domestic robot BIRON. The analysis is the basis to achieve our goal of improving human-robot interaction iteratively.},
           booktitle = {4th ACM/IEEE international conference on Human robot interaction - HRI '09},
              editor = {B. Gottfried and H. Aghajan},
               title = {Systemic interaction analysis (SInA) in HRI},
           publisher = {ACM / IEEE},
                year = {2009},
               pages = {93--100},
            keywords = {ARRAY(0x7fdc7819e650)},
                 url = {http://eprints.lincoln.ac.uk/6926/},
            abstract = {Recent developments in robotics enable advanced human-robot interaction. Especially interactions of novice users with robots are often unpredictable and, therefore, demand for novel methods for the analysis of the interaction in systemic ways. We propose Systemic Interaction Analysis (SInA) as a method to jointly analyze system level and interaction level in an integrated manner using one tool. The approach allows us to trace back patterns that deviate from prototypical interaction sequences to the distinct system components of our autonomous robot. In this paper, we exemplarily apply the method to the analysis of the follow behavior of our domestic robot BIRON. The analysis is the basis to achieve our goal of improving human-robot interaction iteratively.}
    }
  • M. Mangan and B. Webb, “Modelling place memory in crickets,” Biological Cybernetics, vol. 101, pp. 307-323, 2009.
    [BibTeX] [Abstract] [EPrints]

    Insects can remember and return to a place of interest using the surrounding visual cues. In previous experiments, we showed that crickets could home to an invisible cool spot in a hot environment. They did so most effectively with a natural scene surround, though they were also able to home with distinct landmarks or blank walls. Homing was not successful, however, when visual cues were removed through a dark control. Here, we compare six different models of visual homing using the same visual environments. Only models deemed biologically plausible for use by insects were implemented. The average landmark vector model and first order differential optic flow are unable to home better than chance in at least one of the visual environments. Second order differential optic flow and GradDescent on image differences can home better than chance in all visual environments, and best in the natural scene environment, but do not quantitatively match the distributions of the cricket data. Two models, centre of mass average landmark vector and RunDown on image differences, could produce the same pattern of results as observed for crickets. Both models performed best using simple binary images and were robust to changes in resolution and image smoothing.

    @article{lirolem24846,
              volume = {101},
               month = {October},
              author = {Michael Mangan and Barbara Webb},
               title = {Modelling place memory in crickets},
           publisher = {Springer},
             journal = {Biological Cybernetics},
               pages = {307--323},
                year = {2009},
            keywords = {ARRAY(0x7fdc781aa9f8)},
                 url = {http://eprints.lincoln.ac.uk/24846/},
             abstract = {Insects can remember and return to a place of interest using the surrounding visual cues. In previous experiments, we showed that crickets could home to an invisible cool spot in a hot environment. They did so most effectively with a natural scene surround, though they were also able to home with distinct landmarks or blank walls. Homing was not successful, however, when visual cues were removed through a dark control. Here, we compare six different models of visual homing using the same visual environments. Only models deemed biologically plausible for use by insects were implemented. The average landmark vector model and first order differential optic flow are unable to home better than chance in at least one of the visual environments. Second order differential optic flow and GradDescent on image differences can home better than chance in all visual environments, and best in the natural scene environment, but do not quantitatively match the distributions of the cricket data. Two models{--}centre of mass average landmark vector and RunDown on image differences{--}could produce the same pattern of results as observed for crickets. Both models performed best using simple binary images and were robust to changes in resolution and image smoothing.}
    }
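
    As a rough illustration of the image-difference homing idea that the RunDown model above builds on, the following Python sketch descends on an image difference function by sampling a few candidate moves and keeping the one that lowers the pixel-wise difference to a stored goal snapshot. It is a minimal sketch only: the function names, the eight-direction sampling and the render_view callback are illustrative assumptions, not the authors' implementation.

        # Hedged sketch of descent on an image difference function (IDF), in the
        # spirit of the "RunDown on image differences" homing model. Names and the
        # candidate-sampling scheme are illustrative assumptions, not paper code.
        import numpy as np

        def image_difference(view_a, view_b):
            """Root-mean-square pixel difference between two equally sized grey images."""
            return np.sqrt(np.mean((view_a.astype(float) - view_b.astype(float)) ** 2))

        def rundown_step(position, snapshot, render_view, step=0.05):
            """Try a few candidate moves and keep the one that lowers the IDF.

            render_view(position) is assumed to return the panoramic view at a
            2D position; in the paper this role is played by the recorded arena.
            """
            current = image_difference(render_view(position), snapshot)
            candidates = [position + step * np.array([np.cos(a), np.sin(a)])
                          for a in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)]
            scores = [image_difference(render_view(p), snapshot) for p in candidates]
            best = int(np.argmin(scores))
            # Only move if some direction actually reduces the image difference.
            return candidates[best] if scores[best] < current else position
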
  • H. Meng, K. Appiah, A. Hunter, S. Yue, M. Hobden, N. Priestley, P. Hobden, and C. Pettit, “A modified sparse distributed memory model for extracting clean patterns from noisy inputs,” in IEEE International Joint Conference on Neural Networks (IJCNN 2009), Atlanta, USA., 2009.
    [BibTeX] [Abstract] [EPrints]

    The Sparse Distributed Memory (SDM) proposed by Kanerva provides a simple model for human long-term memory, with a strong underlying mathematical theory. However, there are problematic features in the original SDM model that affect its efficiency and performance in real world applications and for hardware implementation. In this paper, we propose modifications to the SDM model that improve its efficiency and performance in pattern recall. First, the address matrix is built using training samples rather than random binary sequences. This improves the recall performance significantly. Second, the content matrix is modified using a simple tri-state logic rule. This reduces the storage requirements of the SDM and simplifies the implementation logic, making it suitable for hardware implementation. The modified model has been tested using pattern recall experiments. It is found that the modified model can recall clean patterns very well from noisy inputs.

    @inproceedings{lirolem1879,
           booktitle = {IEEE International Joint Conference on Neural Networks (IJCNN 2009), Atlanta, USA.},
               month = {June},
               title = {A modified sparse distributed memory model for extracting clean patterns from noisy inputs},
              author = {Hongying Meng and Kofi Appiah and Andrew Hunter and Shigang Yue and Mervyn Hobden and Nigel Priestley and Peter Hobden and Cy Pettit},
                year = {2009},
            keywords = {ARRAY(0x7fdc7801e578)},
                 url = {http://eprints.lincoln.ac.uk/1879/},
            abstract = {Abstract{--}The Sparse Distributed Memory (SDM) proposed by Kanerva provides a simple model for human long-term memory, with a strong underlying mathematical theory. However, there are problematic features in the original SDM model that affect its efficiency and performance in real world applications and for hardware implementation. In this paper, we propose modifications to the SDM model that improve its efficiency and performance in pattern recall. First, the address matrix is built using training samples rather than random binary sequences. This improves the recall performance significantly. Second, the content matrix is modified using a simple tri-state logic rule. This reduces the storage requirements of the SDM and simplifies the implementation logic, making it suitable for hardware implementation. The modified model has been tested using pattern recall experiments. It is found that the modified model can recall clean patterns very well from noisy inputs.}
    }
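
    The abstract above pins the modification down to two choices: hard-location addresses taken from the training samples and a tri-state content matrix. The sketch below, written as a small NumPy class, shows one plausible reading of those choices; the class name, the Hamming-radius activation and the clip-to-{-1, 0, +1} update are assumptions for illustration, not the published model.

        # Hedged sketch of an SDM with training-sample addresses and a tri-state
        # content matrix. All names and the exact update rule are assumptions.
        import numpy as np

        class ModifiedSDM:
            def __init__(self, addresses, radius):
                # Hard-location addresses are binary training patterns
                # (shape: locations x dimension), not random sequences.
                self.addresses = addresses.astype(np.uint8)
                self.radius = radius
                self.content = np.zeros(addresses.shape, dtype=np.int8)  # tri-state cells

            def _active(self, pattern):
                # Activate hard locations within a Hamming-distance radius of the input.
                dist = np.count_nonzero(self.addresses != pattern, axis=1)
                return dist <= self.radius

            def write(self, pattern):
                bipolar = np.where(pattern > 0, 1, -1).astype(np.int8)
                idx = self._active(pattern)
                # Tri-state update: counters are clipped to {-1, 0, +1}.
                self.content[idx] = np.clip(self.content[idx] + bipolar, -1, 1)

            def read(self, pattern):
                idx = self._active(pattern)
                summed = self.content[idx].sum(axis=0)
                return (summed > 0).astype(np.uint8)
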
  • H. Meng, S. Yue, A. Hunter, K. Appiah, M. Hobden, N. Priestley, P. Hobden, and C. Pettit, “A modified neural network model for Lobula Giant Movement Detector with additional depth movement feature,” in IEEE International Joint Conference on Neural Networks (IJCNN 2009), Atlanta, USA., 2009.
    [BibTeX] [Abstract] [EPrints]

    The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron that is located in the Lobula layer of the Locust nervous system. The LGMD increases its firing rate in response to both the velocity of the approaching object and its proximity. It has been found that it can respond to looming stimuli very quickly and can trigger avoidance reactions whenever a rapidly approaching object is detected. It has been successfully applied in visual collision avoidance systems for vehicles and robots. This paper proposes a modified LGMD model that provides additional movement depth direction information. The proposed model retains the simplicity of the previous neural network model, adding only a few new cells. It has been tested on both simulated and recorded video data sets. The experimental results show that the modified model can very efficiently provide stable information on the depth direction of movement.

    @inproceedings{lirolem1971,
           booktitle = {IEEE International Joint Conference on Neural Networks (IJCNN 2009), Atlanta, USA.},
               month = {June},
               title = {A modified neural network model for Lobula Giant Movement Detector with additional depth movement feature},
              author = {Hongying Meng and Shigang Yue and Andrew Hunter and Kofi Appiah and Mervyn Hobden and Nigel Priestley and Peter Hobden and Cy Pettit},
                year = {2009},
                note = {The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron that is located in the Lobula layer of the Locust nervous system. The LGMD increases its firing rate in response to both the velocity of the approaching object and its proximity. It has been found that it can respond to looming stimuli very quickly and can trigger avoidance reactions whenever a rapidly approaching object is detected. It has been successfully applied in visual collision avoidance systems for vehicles and robots. This paper proposes a modified LGMD model that provides additional movement depth direction information. The proposed model retains the simplicity of the previous neural network model, adding only a few new cells. It has been tested on both simulated and recorded video data sets. The experimental results shows that the modified model can very efficiently provide stable information on the depth direction of movement.},
            keywords = {ARRAY(0x7fdc7811dd98)},
                 url = {http://eprints.lincoln.ac.uk/1971/},
            abstract = {The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron that is located in the Lobula layer of the Locust nervous system. The LGMD increases its firing rate in response to both the velocity of the approaching object and its proximity. It has been found that it can respond to looming stimuli very quickly and can trigger avoidance reactions whenever a rapidly approaching object is detected. It has been successfully applied in visual collision avoidance systems for vehicles and robots. This paper proposes a modified LGMD model that provides additional movement depth direction information. The proposed model retains the simplicity of the previous neural network model, adding only a few new cells. It has been tested on both simulated and recorded video data sets. The experimental results shows that the modified model can very efficiently provide stable information on the depth direction of movement.}
    }
  • C. Mertes, A. Dierker, T. Hermann, M. Hanheide, and G. Sagerer, “Enhancing human cooperation with multimodal augmented reality,” in Proceedings of the 13th International Conference on Human-Computer Interaction, 2009, pp. 447-451.
    [BibTeX] [Abstract] [EPrints]

    Humans naturally use an impressive variety of ways to communicate. In this work, we investigate the possibilities of complementing these natural communication channels with artificial ones. For this, augmented reality is used as a technique to add synthetic visual and auditory stimuli to people's perception. A system for the mutual display of the gaze direction of two interactants is presented and its acceptance is shown through a study. Finally, future possibilities of promoting this novel concept of artificial communication channels are explored.

    @inproceedings{lirolem6918,
               month = {July},
              author = {Christian Mertes and Angelika Dierker and Thomas Hermann and Marc Hanheide and Gerhard Sagerer},
                note = {Humans naturally use an impressive variety of ways to com-
    municate. In this work, we investigate the possibilities of complementing these natural communication channels with articial ones. For this, augmented reality is used as a technique to add synthetic visual and auditory stimuli to people's perception. A system for the mutual display
    of the gaze direction of two interactants is presented and its acceptance is shown through a study. Finally, future possibilities of promoting this novel concept of articial communication channels are explored},
           booktitle = {Proceedings of the 13th International Conference on Human-Computer Interaction},
              editor = {B. Gottfried and H. Aghajan},
               title = {Enhancing human cooperation with multimodal augmented reality},
           publisher = {Springer},
                year = {2009},
               pages = {447--451},
            keywords = {ARRAY(0x7fdc7803f9e0)},
                 url = {http://eprints.lincoln.ac.uk/6918/},
            abstract = {Humans naturally use an impressive variety of ways to com-
    municate. In this work, we investigate the possibilities of complementing these natural communication channels with articial ones. For this, augmented reality is used as a technique to add synthetic visual and auditory stimuli to people's perception. A system for the mutual display
    of the gaze direction of two interactants is presented and its acceptance is shown through a study. Finally, future possibilities of promoting this novel concept of articial communication channels are explored}
    }
  • C. Ooi, E. Bullmore, A. Wink, L. Sendur, A. Barnes, S. Achard, J. Aspden, S. Abbott, S. Yue, M. Kitzbichler, D. Meunier, V. Maxim, R. Salvador, J. Henty, R. Tait, N. Subramaniam, and J. Suckling, “CamBAfx: workflow design, implementation and application for neuroimaging,” Frontiers in Neuroinformatics, vol. 3, pp. 1-10, 2009.
    [BibTeX] [Abstract] [EPrints]

    CamBAfx is a workflow application designed for both researchers who use workflows to process data (consumers) and those who design them (designers). It provides a front-end (user interface) optimized for data processing designed in a way familiar to consumers. The back-end uses a pipeline model to represent workflows since this is a common and useful metaphor used by designers and is easy to manipulate compared to other representations like programming scripts. As an Eclipse Rich Client Platform application, CamBAfx's pipelines and functions can be bundled with the software or downloaded post-installation. The user interface contains all the workflow facilities expected by consumers. Using the Eclipse Extension Mechanism, designers are encouraged to customize CamBAfx for their own pipelines. CamBAfx wraps a workflow facility around neuroinformatics software without modification. CamBAfx's design, licensing and Eclipse Branding Mechanism allow it to be used as the user interface for other software, facilitating exchange of innovative computational tools between originating labs.

    @article{lirolem2666,
              volume = {3},
               month = {August},
              author = {Cinly Ooi and Edward Bullmore and Alle-Meije Wink and Levent Sendur and Anna Barnes and Sophie Achard and John Aspden and Sanja Abbott and Shigang Yue and Manfred Kitzbichler and David Meunier and Voichita Maxim and Raymond Salvador and Julian Henty and Roger Tait and Naresh Subramaniam and John Suckling},
                note = {CamBAfx is a workflow application designed for both researchers who use workflows to process data (consumers) and those who design them (designers). It provides a front-end (user
    interface) optimized for data processing designed in a way familiar to consumers. The back-end
    uses a pipeline model to represent workfl ows since this is a common and useful metaphor used
    by designers and is easy to manipulate compared to other representations like programming
    scripts. As an Eclipse Rich Client Platform application, CamBAfx?s pipelines and functions can
    be bundled with the software or downloaded post-installation. The user interface contains all the
    workfl ow facilities expected by consumers. Using the Eclipse Extension Mechanism designers
    are encouraged to customize CamBAfx for their own pipelines. CamBAfx wraps a workfl ow
    facility around neuroinformatics software without modifi cation. CamBAfx?s design, licensing
    and Eclipse Branding Mechanism allow it to be used as the user interface for other software,
    facilitating exchange of innovative computational tools between originating labs.},
               title = {CamBAfx: workflow design, implementation and application for neuroimaging},
           publisher = {Frontiers Research Foundation},
                year = {2009},
             journal = {Frontiers in Neuroinformatics},
               pages = {1--10},
            keywords = {ARRAY(0x7fdc780ad338)},
                 url = {http://eprints.lincoln.ac.uk/2666/},
            abstract = {CamBAfx is a workflow application designed for both researchers who use workflows to process data (consumers) and those who design them (designers). It provides a front-end (user
    interface) optimized for data processing designed in a way familiar to consumers. The back-end
    uses a pipeline model to represent workfl ows since this is a common and useful metaphor used
    by designers and is easy to manipulate compared to other representations like programming
    scripts. As an Eclipse Rich Client Platform application, CamBAfx?s pipelines and functions can
    be bundled with the software or downloaded post-installation. The user interface contains all the
    workfl ow facilities expected by consumers. Using the Eclipse Extension Mechanism designers
    are encouraged to customize CamBAfx for their own pipelines. CamBAfx wraps a workfl ow
    facility around neuroinformatics software without modifi cation. CamBAfx?s design, licensing
    and Eclipse Branding Mechanism allow it to be used as the user interface for other software,
    facilitating exchange of innovative computational tools between originating labs.}
    }
  • J. Peltason, I. Lütkebohle, B. Wrede, and M. Hanheide, “Mixed initiative in interactive robotic learning,” in Mixed Initiative Workshop on Improving Human-Robot Communication with Mixed-Initiative and Context-Awareness at the 18th IEEE International Symposium on Robot and Human Interactive Communication, 2009.
    [BibTeX] [Abstract] [EPrints]

    In learning tasks, interaction is mostly about the exchange of knowledge. The interaction process shall be governed on the one hand by the knowledge the tutor wants to convey and on the other by the lack of knowledge of the learner. In human-robot interaction (HRI), it is usually the human demonstrating or explicitly verbalizing her knowledge and the robot acquiring a respective representation. The ultimate goal in interactive robot learning is thus to enable inexperienced, untrained users to tutor robots in a most natural and intuitive manner. This goal is often impeded by a lack of knowledge of the human about the internal processing and expectations of the robot and by the inflexibility of the robot to understand open-ended, unconstrained tutoring or demonstration. Hence, we propose mixed-initiative strategies to allow both to mutually contribute to the interactive learning process as a bi-directional negotiation about knowledge. Along this line, this paper discusses two initially different case studies on object manipulation and learning of spatial environments. We present different styles of mixed-initiative in these scenarios and discuss the merits in each case.

    @inproceedings{lirolem6923,
               month = {September},
              author = {Julia Peltason and Ingo L{\"u}tkebohle and Britta Wrede and Marc Hanheide},
                note = {In learning tasks, interaction is mostly about the exchange
    of knowledge. The interaction process shall be governed on the one hand by the knowledge the tutor wants to convey and on the other by the lacks of knowledge of the learner. In human-robot interaction (HRI), it is usually the human demonstrating or explicitly verbalizing her knowl-
    edge and the robot acquiring a respective representation. The ultimate goal in interactive robot learning is thus to enable inexperienced, un- trained users to tutor robots in a most natural and intuitive manner.
    This goal is often impeded by a lack of knowledge of the human about the internal processing and expectations of the robot and by the inflexibility of the robot to understand open-ended, unconstrained tutoring or demonstration. Hence, we propose mixed-initiative strategies to allow both to mutually contribute to the interactive learning process as
    a bi-directional negotiation about knowledge. Along this line this paper discusses two initially different case studies on object manipulation and learning of spatial environments. We present different styles of mixed-
    initiative in these scenarios and discuss the merits in each case.},
           booktitle = {Mixed Initiative Workshop on Improving Human-Robot Communication with Mixed-Initiative and Context-Awareness at the 18th IEEE International Symposium on Robot and Human Interactive Communication},
              editor = {B. Gottfried and H. Aghajan},
               title = {Mixed initiative in interactive robotic learning},
           publisher = {IEEE},
                year = {2009},
            keywords = {ARRAY(0x7fdc780eb5b8)},
                 url = {http://eprints.lincoln.ac.uk/6923/},
            abstract = {In learning tasks, interaction is mostly about the exchange
    of knowledge. The interaction process shall be governed on the one hand by the knowledge the tutor wants to convey and on the other by the lacks of knowledge of the learner. In human-robot interaction (HRI), it is usually the human demonstrating or explicitly verbalizing her knowl-
    edge and the robot acquiring a respective representation. The ultimate goal in interactive robot learning is thus to enable inexperienced, un- trained users to tutor robots in a most natural and intuitive manner.
    This goal is often impeded by a lack of knowledge of the human about the internal processing and expectations of the robot and by the inflexibility of the robot to understand open-ended, unconstrained tutoring or demonstration. Hence, we propose mixed-initiative strategies to allow both to mutually contribute to the interactive learning process as
    a bi-directional negotiation about knowledge. Along this line this paper discusses two initially different case studies on object manipulation and learning of spatial environments. We present different styles of mixed-
    initiative in these scenarios and discuss the merits in each case.}
    }
  • J. Peltason, F. H. K. Siepmann, T. P. Spexard, B. Wrede, M. Hanheide, and E. A. Topp, “Mixed-initiative in human augmented mapping ,” in IEEE International Conference on Robotics and Automation., 2009, pp. 2146-2153.
    [BibTeX] [Abstract] [EPrints]

    In scenarios that require a close collaboration and knowledge transfer between inexperienced users and robots, the “learning by interacting” paradigm goes hand in hand with appropriate representations and learning methods. In this paper we discuss a mixed-initiative strategy for robotic learning by interacting with a user in a joint map acquisition process. We propose the integration of an environment representation approach into our interactive learning framework. The environment representation and mapping system supports both user-driven and data-driven strategies for the acquisition of spatial information, so that a mixed-initiative strategy for the learning process is realised. We evaluate our system with test runs according to the scenario of a guided tour, extending the area of operation from a structured laboratory environment to less predictable domestic settings.

    @inproceedings{lirolem6924,
               month = {May},
              author = {Julia Peltason and F. H. K. Siepmann and T. P. Spexard and Britta Wrede and Marc Hanheide and E. A. Topp},
                note = {In scenarios that require a close collaboration and
    knowledge transfer between inexperienced users and robots,
    the ?learning by interacting? paradigm goes hand in hand
    with appropriate representations and learning methods. In this paper we discuss a mixed initiative strategy for robotic learning by interacting with a user in a joint map acquisition process.
    We propose the integration of an environment representation
    approach into our interactive learning framework. The environment representation and mapping system supports both
    user driven and data driven strategies for the acquisition of spatial information, so that a mixed initiative strategy for the learning process is realised. We evaluate our system with test runs according to the scenario of a guided tour, extending the area of operation from structured laboratory environment to
    less predictable domestic settings},
           booktitle = {IEEE International Conference on Robotics and Automation.},
              editor = {B. Gottfried and H. Aghajan},
               title = {Mixed-initiative in human augmented mapping
    },
           publisher = {IEEE},
                year = {2009},
               pages = {2146--2153},
            keywords = {ARRAY(0x7fdc78041e00)},
                 url = {http://eprints.lincoln.ac.uk/6924/},
            abstract = {In scenarios that require a close collaboration and
    knowledge transfer between inexperienced users and robots,
    the ?learning by interacting? paradigm goes hand in hand
    with appropriate representations and learning methods. In this paper we discuss a mixed initiative strategy for robotic learning by interacting with a user in a joint map acquisition process.
    We propose the integration of an environment representation
    approach into our interactive learning framework. The environment representation and mapping system supports both
    user driven and data driven strategies for the acquisition of spatial information, so that a mixed initiative strategy for the learning process is realised. We evaluate our system with test runs according to the scenario of a guided tour, extending the area of operation from structured laboratory environment to
    less predictable domestic settings}
    }
  • A. Peters, T. P. Spexard, P. Weiß, and M. Hanheide, “Make room for me: a spatial and situational movement concept in HRI,” in Workshop on Behavior Monitoring and Interpretation – Well Being, 2009.
    [BibTeX] [Abstract] [EPrints]

    Mobile robots are already applied in factories and hospitals, merely to perform a distinct task. It is envisioned that robots will soon assist in households, too. Such service robots will have to cope with several situations and tasks and, of course, with sophisticated Human-Robot Interaction (HRI).

    @inproceedings{lirolem6921,
           booktitle = {Workshop on Behavior Monitoring and Interpretation - Well Being},
              editor = {B. Gottfried and H. Aghajan},
               month = {September},
               title = {Make room for me: a spatial and situational movement concept in HRI},
              author = {Annika Peters and Thorsten P. Spexard and Petra Wei{\ss} and Marc Hanheide},
                year = {2009},
                note = {Mobile robots are already applied in factories and hospitals, merely to do a distinct task. It is envisioned that robots assist in households, soon. Those service robots will have to cope with several situations and tasks and of course with sophisticated Human-Robot Interaction (HRI)},
            keywords = {ARRAY(0x7fdc78186190)},
                 url = {http://eprints.lincoln.ac.uk/6921/},
            abstract = {Mobile robots are already applied in factories and hospitals, merely to do a distinct task. It is envisioned that robots assist in households, soon. Those service robots will have to cope with several situations and tasks and of course with sophisticated Human-Robot Interaction (HRI)}
    }
  • A. Peters, P. Weiss, and M. Hanheide, “Avoid me: a spatial movement concept in human-robot interaction,” Cognitive Processing, vol. 10, iss. S2, pp. 177-178, 2009.
    [BibTeX] [Abstract] [EPrints]

    In human-human interaction, social signals or unconscious cues are sent and received by interaction partners. These signals and cues influence the interaction partner, wanted or unwanted, sometimes to achieve a distinct goal. Being aware of those signals, and especially of implicit cues, is crucial when interaction between a robot and a human is modelled. This project aims to use implicit body and machine movements to make HRI smoother and simpler. A robot should not only consider social rules with respect to proxemics when communicating with or encountering people. It should also be able to signal and understand certain spatial constraints. A first spatial and situational constraint, which this research project currently focuses on, is avoiding each other. This includes not only passing by but especially making room for each other. Consider a narrow place, e.g. hallways, door frames or a small kitchen. A robot might block the way or drive towards you, pursuing its own goal like you do. Humans do not even need to speak to each other in order to pass by and avoid bumping into each other, even if the space is narrow. A first study is currently being conducted to find out which behaviour is the most appropriate avoiding strategy and how participants express their wish to pass by. Therefore, a variety of defensive and more offensive avoiding strategies of the robot are applied in experiments. The results of the study will be used to equip the robot with spatial concepts to make interaction faster and more appropriate.

    @article{lirolem6702,
              volume = {10},
              number = {S2},
               month = {September},
              author = {Annika Peters and Petra Weiss and Marc Hanheide},
                note = {In humanhuman interaction, social signals or unconscious cues are sent and received by interaction partners. These signals and cues influence the interaction partner wanted and unwantedsometimes to achieve a distinct goal. To be aware of those signals and especially of implicit cues is crucial when interaction between a robot and a human is modelled. This project aims to use implicit body and machine movements to make HRI smoother and simpler. A robot should not only consider social rules with respect to proxemics in communication or in encounter people. It should also be able to signal and understand certain spatial constraints. A first spatial and situational constraint, which this research project currently focuses on, is avoiding each other. This includes not only passing by but also especially making room for each other. Consider a narrow place, e.g. hallways, door frames or a small kitchen. A robot might block the way or drives towards you, pursuing its own goal like you. Humans do not even speak to each other in order to pass by and avoid bumping into each other even if the space is narrow. A first study is currently conducted to find out which behaviour is the most appropriate avoiding strategy and how participants express their wish to pass by. Therefore, a variety of defensive and more offensive avoiding strategies of the robot are applied in experiments. The results of the study will be used to equip the robot with spatial concepts to make interaction faster and more appropriate.},
               title = {Avoid me: a spatial movement concept in human-robot interaction},
           publisher = {Springer},
                year = {2009},
             journal = {Cognitive Processing},
               pages = {177--178},
            keywords = {ARRAY(0x7fdc7811e320)},
                 url = {http://eprints.lincoln.ac.uk/6702/},
            abstract = {In humanhuman interaction, social signals or unconscious cues are sent and received by interaction partners. These signals and cues influence the interaction partner wanted and unwantedsometimes to achieve a distinct goal. To be aware of those signals and especially of implicit cues is crucial when interaction between a robot and a human is modelled. This project aims to use implicit body and machine movements to make HRI smoother and simpler. A robot should not only consider social rules with respect to proxemics in communication or in encounter people. It should also be able to signal and understand certain spatial constraints. A first spatial and situational constraint, which this research project currently focuses on, is avoiding each other. This includes not only passing by but also especially making room for each other. Consider a narrow place, e.g. hallways, door frames or a small kitchen. A robot might block the way or drives towards you, pursuing its own goal like you. Humans do not even speak to each other in order to pass by and avoid bumping into each other even if the space is narrow. A first study is currently conducted to find out which behaviour is the most appropriate avoiding strategy and how participants express their wish to pass by. Therefore, a variety of defensive and more offensive avoiding strategies of the robot are applied in experiments. The results of the study will be used to equip the robot with spatial concepts to make interaction faster and more appropriate.}
    }
  • A. Rabie, B. Wrede, T. Vogt, and M. Hanheide, “Evaluation and discussion of multi-modal emotion recognition,” in ICCEE ’09. Second International Conference on Computer and Electrical Engineering, Dubai, 2009, pp. 598-602.
    [BibTeX] [Abstract] [EPrints]

    Recognition of emotions from multimodal cues is of basic interest for the design of many adaptive interfaces in human-machine and human-robot interaction. It provides a means to incorporate non-verbal feedback in the interactional course. Humans express their emotional state rather unconsciously, exploiting their different natural communication modalities. In this paper, we present a first study on multimodal recognition of emotions from auditive and visual cues for interaction interfaces. We recognize seven classes of basic emotions by means of visual analysis of talking faces. In parallel, the audio signal is analyzed on the basis of the intonation of the verbal articulation. We compare the performance of state-of-the-art recognition systems on the DaFEx database for both complementary modalities and discuss these results with regard to the theoretical background and possible fusion schemes in real-world multimodal interfaces. © 2009 IEEE.

    @inproceedings{lirolem8325,
              volume = {1},
              author = {A. Rabie and B. Wrede and T. Vogt and M. Hanheide},
                note = {Conference Code: 79725},
           booktitle = {ICCEE '09. Second International Conference on Computer and Electrical Engineering},
             address = {Dubai},
               title = {Evaluation and discussion of multi-modal emotion recognition},
           publisher = {IEEE},
                year = {2009},
             journal = {2009 International Conference on Computer and Electrical Engineering, ICCEE 2009},
               pages = {598--602},
            keywords = {ARRAY(0x7fdc78121aa0)},
                 url = {http://eprints.lincoln.ac.uk/8325/},
            abstract = {Recognition of emotions from multimodal cues is of basic interest for the design of many adaptive interfaces in human-machine and human-robot interaction. It provides a means to incorporate non-verbal feedback in the interactional course. Humans express their emotional state rather unconsciously exploiting their different natural communication modalities. In this paper, we present a first study on multimodal recognition of emotions from auditive and visual cues for interaction interfaces. We recognize seven classes of basic emotions by means of visual analysis of talking faces. In parallel, the audio signal is analyzed on the basis of the intonation of the verbal articulation. We compare the performance of state of the art recognition systems on the DaFEx database for both complement modalities and discuss these results with regard to the theoretical background and possible fusion schemes in real-world multimodal interfaces. {\^A}{\copyright} 2009 IEEE.}
    }
  • M. Rolf, M. Hanheide, and K. J. Rohlfing, “The use of synchrony in parent-child interaction can be measured on a signal-level,” in SRCD 2009 Biennial Meeting, 2009.
    [BibTeX] [Abstract] [EPrints]

    In our approach, we aim at an objective measurement of synchrony in multimodal tutoring behavior. The use of signal correlation provides a well formalized method that yields gradual information about the degree of synchrony. For our analysis, we used and extended an algorithm proposed by Hershey & Movellan (2000) that correlates single-pixel values of a video signal with the loudness of the corresponding audio track over time. The results of all pixels are integrated over the video to achieve a scalar estimate of synchrony.

    @inproceedings{lirolem6927,
               month = {April},
              author = {Matthias Rolf and Marc Hanheide and Katharina J. Rohlfing},
                note = {In our approach, we aim at an objective measurement of
    synchrony in multimodal tutoring behavior. The use of signal
    correlation provides a well formalized method that yields
    gradual information about the degree of synchrony. For our
    analysis, we used and extended an algorithm proposed by
    Hershey \& Movellan (2000) that correlates single-pixel
    values of a video signal with the loudness of the
    corresponding audio track over time. The results of all pixels are integrated over the video to achieve a scalar estimate of synchrony.},
           booktitle = {SRCD 2009 Biennial Meeting},
              editor = {B. Gottfried and H. Aghajan},
               title = {The use of synchrony in parent-child interaction can be measured on a signal-level},
           publisher = {Society for Research in Child Development},
                year = {2009},
            keywords = {ARRAY(0x7fdc78146e50)},
                 url = {http://eprints.lincoln.ac.uk/6927/},
            abstract = {In our approach, we aim at an objective measurement of
    synchrony in multimodal tutoring behavior. The use of signal
    correlation provides a well formalized method that yields
    gradual information about the degree of synchrony. For our
    analysis, we used and extended an algorithm proposed by
    Hershey \& Movellan (2000) that correlates single-pixel
    values of a video signal with the loudness of the
    corresponding audio track over time. The results of all pixels are integrated over the video to achieve a scalar estimate of synchrony.}
    }
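
    The measure described above follows Hershey and Movellan's signal-level synchrony: correlate each pixel's intensity over time with the audio loudness, then integrate over the image. A minimal NumPy reading of that recipe, using plain Pearson correlation per pixel, might look as follows; the function name and the per-frame loudness input are illustrative assumptions rather than the authors' code.

        # Hedged sketch of a signal-level audio-visual synchrony score:
        # per-pixel Pearson correlation with the loudness, averaged over pixels.
        import numpy as np

        def synchrony_score(frames, loudness):
            """frames: (T, H, W) grey video; loudness: (T,) audio energy per frame."""
            T = frames.shape[0]
            pixels = frames.reshape(T, -1).astype(float)           # one time series per pixel
            pixels -= pixels.mean(axis=0)
            audio = loudness.astype(float) - loudness.mean()
            denom = pixels.std(axis=0) * audio.std() + 1e-9         # avoid division by zero
            corr = (pixels * audio[:, None]).mean(axis=0) / denom   # per-pixel correlation
            return float(np.abs(corr).mean())                       # integrate over pixels
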
  • M. Rolf, M. Hanheide, and K. J. Rohlfing, “Attention via synchrony: making use of multimodal cues in social learning,” Autonomous Mental Development, IEEE Transactions on, vol. 1, iss. 1, pp. 55-67, 2009.
    [BibTeX] [Abstract] [EPrints]

    Infants learning about their environment are confronted with many stimuli of different modalities. Therefore, a crucial problem is how to discover which stimuli are related, for instance, in learning words. In making these multimodal “bindings,” infants depend on social interaction with a caregiver to guide their attention towards relevant stimuli. The caregiver might, for example, visually highlight an object by shaking it while vocalizing the object's name. These cues are known to help structuring the continuous stream of stimuli. To detect and exploit them, we propose a model of bottom-up attention by multimodal signal-level synchrony. We focus on the guidance of visual attention from audio-visual synchrony informed by recent adult-infant interaction studies. Consequently, we demonstrate that our model is receptive to parental cues during child-directed tutoring. The findings discussed in this paper are consistent with recent results from developmental psychology but for the first time are obtained employing an objective, computational model. The presence of “multimodal motherese” is verified directly on the audio-visual signal. Lastly, we hypothesize how our computational model facilitates tutoring interaction and discuss its application in interactive learning scenarios, enabling social robots to benefit from adult-like tutoring.

    @article{lirolem6700,
              volume = {1},
              number = {1},
               month = {May},
              author = {Matthias Rolf and Marc Hanheide and Katharina J. Rohfling},
                note = {Infants learning about their environment are confronted with many stimuli of different modalities. Therefore, a crucial problem is how to discover which stimuli are related, for instance, in learning words. In making these multimodal ldquobindings,rdquo infants depend on social interaction with a caregiver to guide their attention towards relevant stimuli. The caregiver might, for example, visually highlight an object by shaking it while vocalizing the object's name. These cues are known to help structuring the continuous stream of stimuli. To detect and exploit them, we propose a model of bottom-up attention by multimodal signal-level synchrony. We focus on the guidance of visual attention from audio-visual synchrony informed by recent adult-infant interaction studies. Consequently, we demonstrate that our model is receptive to parental cues during child-directed tutoring. The findings discussed in this paper are consistent with recent results from developmental psychology but for the first time are obtained employing an objective, computational model. The presence of ldquomultimodal mothereserdquo is verified directly on the audio-visual signal. Lastly, we hypothesize how our computational model facilitates tutoring interaction and discuss its application in interactive learning scenarios, enabling social robots to benefit from adult-like tutoring.},
               title = {Attention via synchrony: making use of multimodal cues in social learning},
           publisher = {IEEE},
                year = {2009},
             journal = {Autonomous Mental Development, IEEE Transactions on},
               pages = {55--67},
            keywords = {ARRAY(0x7fdc78038bf0)},
                 url = {http://eprints.lincoln.ac.uk/6700/},
            abstract = {Infants learning about their environment are confronted with many stimuli of different modalities. Therefore, a crucial problem is how to discover which stimuli are related, for instance, in learning words. In making these multimodal ldquobindings,rdquo infants depend on social interaction with a caregiver to guide their attention towards relevant stimuli. The caregiver might, for example, visually highlight an object by shaking it while vocalizing the object's name. These cues are known to help structuring the continuous stream of stimuli. To detect and exploit them, we propose a model of bottom-up attention by multimodal signal-level synchrony. We focus on the guidance of visual attention from audio-visual synchrony informed by recent adult-infant interaction studies. Consequently, we demonstrate that our model is receptive to parental cues during child-directed tutoring. The findings discussed in this paper are consistent with recent results from developmental psychology but for the first time are obtained employing an objective, computational model. The presence of ldquomultimodal mothereserdquo is verified directly on the audio-visual signal. Lastly, we hypothesize how our computational model facilitates tutoring interaction and discuss its application in interactive learning scenarios, enabling social robots to benefit from adult-like tutoring.}
    }
  • M. Shaker, S. Yue, and T. Duckett, “Vision-based reinforcement learning using approximate policy iteration,” in 14th International Conference on Advanced Robotics (ICAR), 2009, p. 0-6.
    [BibTeX] [Abstract] [EPrints]

    A major issue for reinforcement learning (RL) applied to robotics is the time required to learn a new skill. While RL has been used to learn mobile robot control in many simulated domains, applications involving learning on real robots are still relatively rare. In this paper, the Least-Squares Policy Iteration (LSPI) reinforcement learning algorithm and a new model-based algorithm, Least-Squares Policy Iteration with Prioritized Sweeping (LSPI+), are implemented on a mobile robot to acquire new skills quickly and efficiently. LSPI+ combines the benefits of LSPI and prioritized sweeping, which uses all previous experience to focus the computational effort on the most “interesting” or dynamic parts of the state space. The proposed algorithms are tested on a household vacuum cleaner robot for learning a docking task using vision as the only sensor modality. In experiments these algorithms are compared to other model-based and model-free RL algorithms. The results show that the number of trials required to learn the docking task is significantly reduced using LSPI compared to the other RL algorithms investigated, and that LSPI+ further improves on the performance of LSPI.

    @inproceedings{lirolem2049,
           booktitle = {14th International Conference on Advanced Robotics (ICAR)},
               title = {Vision-based reinforcement learning using approximate policy
    iteration},
              author = {Marwan Shaker and Shigang Yue and Tom Duckett},
                year = {2009},
               pages = {0--6},
                note = {A major issue for reinforcement learning (RL) applied to robotics is the time required to learn a new skill. While RL has been used to learn mobile robot control in many simulated domains, applications involving learning on real
    robots are still relatively rare. In this paper, the Least-Squares Policy Iteration (LSPI) reinforcement learning algorithm and a new model-based algorithm Least-Squares Policy Iteration with Prioritized Sweeping (LSPI+), are implemented on a mobile robot to acquire new skills quickly and efficiently. LSPI+ combines the benefits of LSPI and prioritized sweeping, which uses all previous experience to focus the computational effort on the most ?interesting? or dynamic parts of the state space. 
    The proposed algorithms are tested on a household vacuum
    cleaner robot for learning a docking task using vision as the only sensor modality. In experiments these algorithms are compared to other model-based and model-free RL algorithms. The results show that the number of trials required to learn the docking task is significantly reduced using LSPI compared to the other RL algorithms investigated, and that LSPI+ further improves on the performance of LSPI.},
            keywords = {ARRAY(0x7fdc7814e7f0)},
                 url = {http://eprints.lincoln.ac.uk/2049/},
            abstract = {A major issue for reinforcement learning (RL) applied to robotics is the time required to learn a new skill. While RL has been used to learn mobile robot control in many simulated domains, applications involving learning on real
    robots are still relatively rare. In this paper, the Least-Squares Policy Iteration (LSPI) reinforcement learning algorithm and a new model-based algorithm Least-Squares Policy Iteration with Prioritized Sweeping (LSPI+), are implemented on a mobile robot to acquire new skills quickly and efficiently. LSPI+ combines the benefits of LSPI and prioritized sweeping, which uses all previous experience to focus the computational effort on the most ?interesting? or dynamic parts of the state space. 
    The proposed algorithms are tested on a household vacuum
    cleaner robot for learning a docking task using vision as the only sensor modality. In experiments these algorithms are compared to other model-based and model-free RL algorithms. The results show that the number of trials required to learn the docking task is significantly reduced using LSPI compared to the other RL algorithms investigated, and that LSPI+ further improves on the performance of LSPI.}
    }
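
    For readers unfamiliar with LSPI, the core of the algorithm is the LSTDQ step: solve a linear system built from sampled transitions to obtain the weights of a linear Q-function, derive the greedy policy, and repeat. The sketch below shows that loop under assumed interfaces (a feature function phi(s, a), a list of (s, a, r, s_next) samples and a discrete action set); it does not include the prioritized-sweeping extension that distinguishes LSPI+.

        # Hedged sketch of least-squares policy iteration (LSTDQ + greedy policy
        # improvement). Interfaces and the regularisation term are assumptions.
        import numpy as np

        def lstdq(samples, phi, policy, gamma=0.95):
            """samples: list of (s, a, r, s_next); phi(s, a) -> feature vector."""
            k = phi(*samples[0][:2]).shape[0]
            A = np.zeros((k, k))
            b = np.zeros(k)
            for s, a, r, s_next in samples:
                f = phi(s, a)
                f_next = phi(s_next, policy(s_next))
                A += np.outer(f, f - gamma * f_next)
                b += r * f
            return np.linalg.solve(A + 1e-6 * np.eye(k), b)  # regularise for stability

        def lspi(samples, phi, actions, gamma=0.95, iters=20):
            w = np.zeros(phi(*samples[0][:2]).shape[0])
            for _ in range(iters):
                # Greedy policy with respect to the current linear Q-function.
                policy = lambda s, w=w: max(actions, key=lambda a: phi(s, a) @ w)
                w = lstdq(samples, phi, policy, gamma)
            return w
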
  • T. P. Spexard and M. Hanheide, “System integration supporting evolutionary development and design,” in Conference on Human Centered Robotic Systems, 2009, pp. 1-9.
    [BibTeX] [Abstract] [EPrints]

    With robotic systems entering our daily life, they have to become more flexible, subsuming a multitude of abilities in one single integrated system. Subsequently, an increased extensibility of the robots' system architectures is needed. The goal is to facilitate a long-time evolution of the integrated system in line with the scientific progress on the algorithmic level. In this paper we present an approach developed for an event-driven robot architecture, focussing on the coordination and interplay of new abilities and components. Appropriate timing, sequencing strategies, execution guarantees, and process flow synchronization are taken into account to allow appropriate arbitration and interaction between components as well as between the integrated system and the user. The presented approach features dynamic reconfiguration and global coordination based on simple production rules. These are applied for the first time in conjunction with flexible representations in global memory spaces and an event-driven architecture. As a result, a highly adaptive robot control compared to alternative approaches is achieved, allowing system modification during runtime even within complex interactive human-robot scenarios.

    @inproceedings{lirolem6925,
               month = {November},
              author = {Thorsten P. Spexard and Marc Hanheide},
                note = {Abstract With robotic systems entering our daily life, they have to become more flexible and subsuming a multitude of abilities in one single integrated system. Subsequently an increased extensibility of the robots? system architectures is needed. 
    The goal is to facilitate a long-time evolution of the integrated system in-line with the scientific progress on the algorithmic level. In this paper we present an approach developed for an event-driven robot architecture, focussing on the coordination and interplay of new abilities and components. Appropriate timing, sequencing strategies, execution guaranties, and process flow synchronization are taken into account to allow appropriate arbitration and interaction between components as well as between the integrated system and the user. The presented approach features dynamic reconfiguration and global coordination based on simple production rules. These are applied fist time in conjunction with flexible representations in global memory spaces and an event-driven architecture. As a result a highly adaptive robot control compared to alternative approaches is achieved, allowing system modification during runtime even within complex interactive human-robot scenarios},
           booktitle = {Conference on Human Centered Robotic Systems},
              editor = {B. Gottfried and H. Aghajan},
               title = {System integration supporting evolutionary development and design},
           publisher = {Springer},
                year = {2009},
               pages = {1--9},
            keywords = {ARRAY(0x7fdc781a63f8)},
                 url = {http://eprints.lincoln.ac.uk/6925/},
            abstract = {Abstract With robotic systems entering our daily life, they have to become more flexible and subsuming a multitude of abilities in one single integrated system. Subsequently an increased extensibility of the robots? system architectures is needed. 
    The goal is to facilitate a long-time evolution of the integrated system in-line with the scientific progress on the algorithmic level. In this paper we present an approach developed for an event-driven robot architecture, focussing on the coordination and interplay of new abilities and components. Appropriate timing, sequencing strategies, execution guaranties, and process flow synchronization are taken into account to allow appropriate arbitration and interaction between components as well as between the integrated system and the user. The presented approach features dynamic reconfiguration and global coordination based on simple production rules. These are applied fist time in conjunction with flexible representations in global memory spaces and an event-driven architecture. As a result a highly adaptive robot control compared to alternative approaches is achieved, allowing system modification during runtime even within complex interactive human-robot scenarios}
    }
  • T. P. Spexard and M. Hanheide, “System integration supporting evolutionary development and design,” in Human centered robot systems: cognition, interaction, technology , Springer Berlin Heidelberg, 2009, pp. 1-9.
    [BibTeX] [Abstract] [EPrints]

    With robotic systems entering our daily life, they have to become more flexible, subsuming a multitude of abilities in one single integrated system. Subsequently, an increased extensibility of the robots' system architectures is needed. The goal is to facilitate a long-time evolution of the integrated system in line with the scientific progress on the algorithmic level. In this paper we present an approach developed for an event-driven robot architecture, focussing on the coordination and interplay of new abilities and components. Appropriate timing, sequencing strategies, execution guarantees, and process flow synchronisation are taken into account to allow appropriate arbitration and interaction between components as well as between the integrated system and the user. The presented approach features dynamic reconfiguration and global coordination based on simple production rules. These are applied for the first time in conjunction with flexible representations in global memory spaces and an event-driven architecture. As a result, a highly adaptive robot control compared to alternative approaches is achieved, allowing system modification during runtime even within complex interactive human-robot scenarios.

    @incollection{lirolem11964,
              number = {6},
               month = {November},
              author = {Thorsten P. Spexard and Marc Hanheide},
              series = {Cognitive Systems Monographs},
           booktitle = {Human centered robot systems: cognition, interaction, technology },
               title = {System integration supporting evolutionary development and design},
           publisher = {Springer Berlin Heidelberg},
                year = {2009},
               pages = {1--9},
            keywords = {ARRAY(0x7fdc7803ac38)},
                 url = {http://eprints.lincoln.ac.uk/11964/},
            abstract = {With robotic systems entering our daily life, they have to become more flexible and subsuming a multitude of abilities in one single integrated system. Sub- sequently an increased extensibility of the robots? system architectures is needed. The goal is to facilitate a long-time evolution of the integrated system in-line with the scientific progress on the algorithmic level. In this paper we present an approach developed for an event-driven robot architecture, focussing on the coordination and interplay of new abilities and components. Appropriate timing, sequencing strategies, execution guaranties, and process flow synchronisation are taken into account to allow appropriate arbitration and interaction between components as well as between the integrated system and the user. The presented approach features dynamic reconfiguration and global coordination based on simple production rules. These are applied first time in conjunction with flexible representations in global memory spaces and an event-driven architecture. As a result a highly adaptive robot control compared to alternative approaches is achieved, allowing system modification during runtime even within complex interactive human-robot scenarios. }
    }
  • B. Wrede, K. J. Rohlfing, M. Hanheide, and G. Sagerer, “Towards learning by interacting,” in Creating brain-like intelligence: from basic principles to complex intelligent systems, B. Sendhoff, E. Korner, O. Sporns, H. Ritter, and K. Doya, Eds., Springer, 2009.
    [BibTeX] [Abstract] [EPrints]

    @incollection{lirolem6715,
              number = {5436},
               month = {November},
              author = {Britta Wrede and Katharina J. Rohlfing and Marc Hanheide and Gerhard Sagerer},
              series = {Lecture Notes in Computer Science},
                note = {Abstract},
           booktitle = {Creating brain-like intelligence: from basic principles to complex intelligent systems},
              editor = {Bernhard Sendhoff and Edgar Korner and Olaf Sporns and Helge Ritter and Kenji Doya},
               title = {Towards learning by interacting},
           publisher = {Springer},
                year = {2009},
            keywords = {ARRAY(0x7fdc7803ac80)},
                 url = {http://eprints.lincoln.ac.uk/6715/},
            abstract = {Abstract}
    }
  • F. Yuan, A. Swadzba, R. Philippsen, O. Engin, M. Hanheide, and S. Wachsmuth, “Laser-based navigation enhanced with 3D time-of-flight data,” in Conference of 2009 IEEE International Conference on Robotics and Automation, ICRA ’09, Kobe, 2009, pp. 2844-2850.
    [BibTeX] [Abstract] [EPrints]

    Navigation and obstacle avoidance in robotics using planar laser scans has matured over the last decades. They basically enable robots to penetrate highly dynamic and populated spaces, such as people's homes, and move around smoothly. However, in an unconstrained environment the two-dimensional perceptual space of a fixed-mounted laser is not sufficient to ensure safe navigation. In this paper, we present an approach that pools a fast and reliable motion generation approach with modern 3D capturing techniques using a Time-of-Flight camera. Instead of attempting to implement full 3D motion control, which is computationally more expensive and simply not needed for the targeted scenario of a domestic robot, we introduce a “virtual laser”. For the originally solely laser-based motion generation, the technique of fusing real laser measurements and 3D point clouds into a continuous data stream is 100% compatible and transparent. The paper covers the general concept, the necessary extrinsic calibration of two very different types of sensors, and exemplarily illustrates the benefit, which is to avoid obstacles not being perceivable in the original laser scan. © 2009 IEEE.

    @inproceedings{lirolem7218,
               month = {May},
              author = {Fang Yuan and Agnes Swadzba and Roland Philippsen and Orhan Engin and Marc Hanheide and Sven Wachsmuth},
                note = {Navigation and obstacle avoidance in robotics using planar laser scans has matured over the last decades. They basically enable robots to penetrate highly dynamic and populated spaces, such as people's home, and move around smoothly. However, in an unconstrained environment the twodimensional perceptual space of a fixed mounted laser is not sufficient to ensure safe navigation. In this paper, we present an approach that pools a fast and reliable motion generation approach with modern 3D capturing techniques using a Timeof-Flight camera. Instead of attempting to implement full 3D motion control, which is computationally more expensive and simply not needed for the targeted scenario of a domestic robot, we introduce a \&quot;virtual laser\&quot;. For the originally solely laserbased motion generation the technique of fusing real laser measurements and 3D point clouds into a continuous data stream is 100\% compatible and transparent. The paper covers the general concept, the necessary extrinsic calibration of two very different types of sensors, and exemplarily illustrates the benefit which is to avoid obstacles not being perceivable in the original laser scan. {\^A}{\copyright} 2009 IEEE.},
           booktitle = {Conference of 2009 IEEE International Conference on Robotics and Automation, ICRA '09},
               title = {Laser-based navigation enhanced with 3D time-of-flight data},
             address = {Kobe},
           publisher = {IEEE},
                year = {2009},
             journal = {Proceedings - IEEE International Conference on Robotics and Automation},
               pages = {2844--2850},
            keywords = {ARRAY(0x7fdc78169500)},
                 url = {http://eprints.lincoln.ac.uk/7218/},
            abstract = {Navigation and obstacle avoidance in robotics using planar laser scans has matured over the last decades. They basically enable robots to penetrate highly dynamic and populated spaces, such as people's home, and move around smoothly. However, in an unconstrained environment the twodimensional perceptual space of a fixed mounted laser is not sufficient to ensure safe navigation. In this paper, we present an approach that pools a fast and reliable motion generation approach with modern 3D capturing techniques using a Timeof-Flight camera. Instead of attempting to implement full 3D motion control, which is computationally more expensive and simply not needed for the targeted scenario of a domestic robot, we introduce a \&quot;virtual laser\&quot;. For the originally solely laserbased motion generation the technique of fusing real laser measurements and 3D point clouds into a continuous data stream is 100\% compatible and transparent. The paper covers the general concept, the necessary extrinsic calibration of two very different types of sensors, and exemplarily illustrates the benefit which is to avoid obstacles not being perceivable in the original laser scan. {\^A}{\copyright} 2009 IEEE.}
    }
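
    The following is a minimal sketch of the virtual-laser fusion described above, assuming the 3D points have already been transformed into the laser frame by the extrinsic calibration; the height band, beam parameters and function names are illustrative and not taken from the paper.

        import numpy as np

        def virtual_laser_scan(real_ranges, angle_min, angle_increment,
                               points_xyz, min_height=0.05, max_height=1.2):
            """Fuse a planar laser scan with a 3D point cloud into one 'virtual' scan.

            real_ranges     : (N,) ranges of the physical laser scan [m]
            angle_min       : bearing of the first beam [rad]
            angle_increment : angular spacing between beams [rad]
            points_xyz      : (M, 3) point cloud already expressed in the laser frame
            min/max_height  : band of heights that matters for a ground robot
            """
            fused = np.asarray(real_ranges, dtype=float).copy()
            n_beams = fused.shape[0]

            # Keep only points at obstacle-relevant heights, then project them onto the scan plane.
            pts = points_xyz[(points_xyz[:, 2] > min_height) & (points_xyz[:, 2] < max_height)]
            bearings = np.arctan2(pts[:, 1], pts[:, 0])
            ranges = np.hypot(pts[:, 0], pts[:, 1])

            # Assign each projected point to the nearest beam and keep the closer reading, so the
            # result stays an ordinary (N,) range array that a 2D motion generator can consume.
            beam_idx = np.round((bearings - angle_min) / angle_increment).astype(int)
            valid = (beam_idx >= 0) & (beam_idx < n_beams)
            np.minimum.at(fused, beam_idx[valid], ranges[valid])
            return fused

    Because the output has the same shape and semantics as a real scan, an existing laser-based obstacle avoider can consume it unchanged, which is the "compatible and transparent" property the abstract refers to.
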
  • S. Yue and C. F. Rind, “Near range path navigation using LGMD visual neural networks,” in 2009 2nd IEEE International Conference on Computer Science and Information Technology, 2009, Beijing, China, 2009.
    [BibTeX] [Abstract] [EPrints]

    In this paper, we propose a method for near-range path navigation of a mobile robot using a pair of biologically inspired visual neural networks, the lobula giant movement detectors (LGMDs). In the proposed binocular-style visual system, each LGMD processes images covering a part of the wide field of view and extracts relevant visual cues as its output. The outputs from the two LGMDs are compared and translated into executable motor commands to control the wheels of the robot in real time. A stronger signal from the LGMD on one side pushes the robot away from that side step by step; therefore, the robot can navigate a visual environment naturally with the proposed vision system. Our experiments showed that this bio-inspired system worked well in different scenarios. (A brief illustrative sketch of the steering rule follows the BibTeX record below.)

    @inproceedings{lirolem2670,
               month = {August},
              author = {Shigang Yue and F. Claire Rind},
              series = {2009 2nd IEEE International Conference on Computer Science and Information Technology, 2009},
                note = {(c) 2003 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users
    },
           booktitle = {2009 2nd IEEE International Conference on Computer Science and Information Technology, 2009},
             address = {Beijing, China},
               title = {Near range path navigation using LGMD visual neural networks},
           publisher = {IEEE},
                year = {2009},
                 url = {http://eprints.lincoln.ac.uk/2670/},
            abstract = {In this paper, we proposed a method for near range path navigation for a mobile robot by using a pair of biologically
    inspired visual neural network ? lobula giant movement detector (LGMD). In the proposed binocular style visual system, each LGMD processes images covering a part of the wide field of view and extracts relevant visual cues as its output. The outputs from the two LGMDs are compared and translated into executable motor commands to control the wheels of the robot in real time. Stronger signal from the LGMD in one side pushes the robot away from this side step by step; therefore, the robot can navigate in a visual environment naturally with the proposed vision system. Our experiments showed that this bio-inspired system worked well in different scenarios.}
    }
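
    A minimal sketch of one way the two LGMD outputs could be turned into differential wheel commands, in the spirit of the behaviour described above; the excitation scale, gains and speeds are placeholders, not values from the paper.

        def lgmd_steering(left_excitation, right_excitation,
                          forward_speed=0.2, turn_gain=0.5):
            """Map the outputs of two LGMD models onto differential wheel speeds.

            A stronger response on one side indicates looming features there, so the
            robot steers towards the other side; equal responses keep it going straight.
            Returns (left_wheel_speed, right_wheel_speed).
            """
            # Positive 'turn' means the left LGMD is more excited: speed up the left wheel
            # and slow the right one, turning the robot away from the left side.
            turn = turn_gain * (left_excitation - right_excitation)
            return forward_speed + turn, forward_speed - turn
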

2008

  • H. Andreasson, T. Duckett, and A. Lilienthal, “A minimalistic approach to appearance-based visual SLAM,” IEEE Transactions on Robotics, vol. 24, iss. 5, pp. 991-1001, 2008.
    [BibTeX] [Abstract] [EPrints]

    This paper presents a vision-based approach to SLAM in indoor/outdoor environments with minimalistic sensing and computational requirements. The approach is based on a graph representation of robot poses, using a relaxation algorithm to obtain a globally consistent map. Each link corresponds to a relative measurement of the spatial relation between the two nodes it connects. The links describe the likelihood distribution of the relative pose as a Gaussian distribution. To estimate the covariance matrix for links obtained from an omni-directional vision sensor, a novel method is introduced based on the relative similarity of neighbouring images. This new method does not require determining distances to image features using, for example, multiple view geometry. Combined indoor and outdoor experiments demonstrate that the approach can handle qualitatively different environments (without modification of the parameters), that it can cope with violations of the "flat floor assumption" to some degree, and that it scales well with increasing size of the environment, producing topologically correct and geometrically accurate maps at low computational cost. Further experiments demonstrate that the approach is also suitable for combining multiple overlapping maps, e.g. for solving the multi-robot SLAM problem with unknown initial poses. (A brief illustrative sketch of graph relaxation follows the BibTeX record below.)

    @article{lirolem2094,
              volume = {24},
              number = {5},
               month = {October},
              author = {Henrik Andreasson and Tom Duckett and Achim Lilienthal},
               title = {A minimalistic approach to appearance-based visual SLAM},
           publisher = {IEEE},
                year = {2008},
             journal = {IEEE Transactions on Robotics},
               pages = {991--1001},
                 url = {http://eprints.lincoln.ac.uk/2094/},
            abstract = {This paper presents a vision-based approach to SLAM in indoor / outdoor environments with minimalistic sensing and computational requirements. The approach is based on a graph representation of robot poses, using a relaxation algorithm to obtain a globally consistent map. Each link corresponds to a
    relative measurement of the spatial relation between the two nodes it connects. The links describe the likelihood distribution of the relative pose as a Gaussian distribution. To estimate the covariance matrix for links obtained from an omni-directional vision sensor, a novel method is introduced based on the relative similarity of neighbouring images. This new method does not require determining distances to image features using multiple
    view geometry, for example. Combined indoor and outdoor experiments demonstrate that the approach can handle qualitatively different environments (without modification of the parameters), that it can cope with violations of the ?flat floor assumption? to some degree, and that it scales well with increasing size of the environment, producing topologically correct and geometrically accurate maps at low computational cost. Further experiments demonstrate that the approach is also suitable for combining multiple overlapping maps, e.g. for solving the multi-robot SLAM problem with unknown initial poses.}
    }
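
    A deliberately simplified, translation-only sketch of relaxing a pose graph of the kind described above. It illustrates the general relaxation idea only, not the authors' exact algorithm or their similarity-based covariance estimate; the node and link layout is assumed.

        import numpy as np

        def relax_pose_graph(poses, links, iterations=100):
            """Gauss-Seidel style relaxation of a 2D, translation-only pose graph.

            poses : (N, 2) array of initial node positions (e.g. from odometry)
            links : list of (i, j, dx, dy, weight) tuples; each says node j should lie
                    at poses[i] + (dx, dy), with 'weight' acting as the link confidence
                    (the inverse of the link covariance).
            """
            poses = np.asarray(poses, dtype=float).copy()
            for _ in range(iterations):
                for j in range(1, len(poses)):                  # node 0 anchors the map
                    acc, wsum = np.zeros(2), 0.0
                    for (a, b, dx, dy, w) in links:
                        if b == j:                              # prediction of j from node a
                            acc += w * (poses[a] + (dx, dy)); wsum += w
                        elif a == j:                            # reverse prediction from node b
                            acc += w * (poses[b] - (dx, dy)); wsum += w
                    if wsum > 0.0:
                        # Move node j to the confidence-weighted mean of its predictions.
                        poses[j] = acc / wsum
            return poses
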
  • C. Bauckhage, S. Wachsmuth, M. Hanheide, S. Wrede, G. Sagerer, G. Heidemann, and H. Ritter, “The visual active memory perspective on integrated recognition systems,” Image and Vision Computing, vol. 26, iss. 1, pp. 5-14, 2008.
    [BibTeX] [Abstract] [EPrints]

    Object recognition is the ability of a system to relate visual stimuli to its knowledge of the world. Although humans perform this task effortlessly and without thinking about it, a general algorithmic solution has not yet been found. Recently, a shift from devising isolated recognition techniques towards integrated systems could be observed [Y. Aloimonos, Active vision revisited, in: Y. Aloimonos (Ed.), Active Perception, Lawrence Erlbaum, 1993, pp. 1-18; H. Christensen, Cognitive (vision) systems, ERCIM News (April, 2003). 17-18]. The visual active memory (VAM) perspective refines this system view towards an interactive computational framework for recognition systems in human everyday environments. VAM is in line with the recently emerged Cognitive Vision paradigm [H. Christensen, Cognitive (vision) systems, ERCIM News (April, 2003). 17-18] which is concerned with vision systems that evaluate, gather and integrate contextual knowledge for visual analysis. It consists of active processes that generate knowledge by means of a tight cooperation of perception, reasoning, learning and prior models. In addition, VAM emphasizes the dynamic representation of gathered knowledge. The memory is assumed to be structured in a hierarchy of successive memory systems that mediate the modularly defined processing components of the recognition system. Recognition and learning take place in the stress field of objects, actions, activities, scene context, and user interaction. In this paper, we exemplify the VAM perspective by means of existing demonstrator systems. Assuming three different perspectives (biological foundation, system engineering, and computer vision), we will show that the VAM concept is central to the cognitive capabilities of the system and that it leads to a more general object recognition framework.

    @article{lirolem6710,
              volume = {26},
              number = {1},
               month = {January},
              author = {Christian Bauckhage and Sven Wachsmuth and Marc Hanheide and S. Wrede and Gerhard Sagerer and G. Heidemann and H. Ritter},
                note = {Object recognition is the ability of a system to relate visual stimuli to its knowledge of the world. Although humans perform this task effortlessly and without thinking about it, a general algorithmic solution has not yet been found. Recently, a shift from devising isolated recognition techniques towards integrated systems could be observed [Y. Aloimonos, Active vision revisited, in: Y. Aloimonos (Ed.), Active Perception, Lawrence Efibaum, 1993, pp. 1?18; H. Christensen, Cognitive (vision) systems, ERCIM News (April, 2003). 17?18]. The visual active memory (VAM) perspective refines this system view towards an interactive computational framework for recognition systems in human everyday environments. VAM is in line with the recently emerged Cognitive Vision paradigm [H. Christensen, Cognitive (vision) systems, ERCIM News (April, 2003). 17?18] which is concerned with vision systems that evaluate, gather and integrate contextual knowledge for visual analysis. It consists of active processes that generate knowledge by means of a tight cooperation of perception, reasoning, learning and prior models. In addition, VAM emphasizes the dynamic representation of gathered knowledge. The memory is assumed to be structured in a hierarchy of successive memory systems that mediate the modularly defined processing components of the recognition system. Recognition and learning take place in the stress field of objects, actions, activities, scene context, and user interaction. In this paper, we exemplify the VAM perspective by means of existing demonstrator systems. Assuming three different perspectives (biological foundation, system engineering, and computer vision), we will show that the VAM concept is central to the cognitive capabilities of the system and that it leads to a more general object recognition framework.},
               title = {The visual active memory perspective on integrated recognition systems},
           publisher = {elsevier},
                year = {2008},
             journal = {Image and Vision Computing},
               pages = {5--14},
                 url = {http://eprints.lincoln.ac.uk/6710/},
            abstract = {Object recognition is the ability of a system to relate visual stimuli to its knowledge of the world. Although humans perform this task effortlessly and without thinking about it, a general algorithmic solution has not yet been found. Recently, a shift from devising isolated recognition techniques towards integrated systems could be observed [Y. Aloimonos, Active vision revisited, in: Y. Aloimonos (Ed.), Active Perception, Lawrence Efibaum, 1993, pp. 1?18; H. Christensen, Cognitive (vision) systems, ERCIM News (April, 2003). 17?18]. The visual active memory (VAM) perspective refines this system view towards an interactive computational framework for recognition systems in human everyday environments. VAM is in line with the recently emerged Cognitive Vision paradigm [H. Christensen, Cognitive (vision) systems, ERCIM News (April, 2003). 17?18] which is concerned with vision systems that evaluate, gather and integrate contextual knowledge for visual analysis. It consists of active processes that generate knowledge by means of a tight cooperation of perception, reasoning, learning and prior models. In addition, VAM emphasizes the dynamic representation of gathered knowledge. The memory is assumed to be structured in a hierarchy of successive memory systems that mediate the modularly defined processing components of the recognition system. Recognition and learning take place in the stress field of objects, actions, activities, scene context, and user interaction. In this paper, we exemplify the VAM perspective by means of existing demonstrator systems. Assuming three different perspectives (biological foundation, system engineering, and computer vision), we will show that the VAM concept is central to the cognitive capabilities of the system and that it leads to a more general object recognition framework.}
    }
  • N. Bellotto, K. Burn, E. Fletcher, and S. Wermter, “Appearance-based localization for mobile robots using digital zoom and visual compass,” Robotics and Autonomous Systems, vol. 56, iss. 2, pp. 143-156, 2008.
    [BibTeX] [Abstract] [EPrints]

    This paper describes a localization system for mobile robots moving in dynamic indoor environments, which uses probabilistic integration of visual appearance and odometry information. The approach is based on a novel image matching algorithm for appearance-based place recognition that integrates digital zooming, to extend the area of application, and a visual compass. Ambiguous information used for recognizing places is resolved with multiple hypothesis tracking and a selection procedure inspired by Markov localization. This enables the system to deal with perceptual aliasing or absence of reliable sensor data. It has been implemented on a robot operating in an office scenario and the robustness of the approach demonstrated experimentally.

    @article{lirolem2103,
              volume = {56},
              number = {2},
              author = {Nicola Bellotto and Kevin Burn and Eric Fletcher and Stefan Wermter},
               title = {Appearance-based localization for mobile robots using digital zoom and visual compass},
           publisher = {Elsevier},
             journal = {Robotics and Autonomous Systems},
               pages = {143--156},
                year = {2008},
                 url = {http://eprints.lincoln.ac.uk/2103/},
            abstract = {This paper describes a localization system for mobile robots moving in dynamic indoor environments, which uses probabilistic integration of visual appearance and odometry information. The approach is based on a novel image matching algorithm for appearance-based place recognition that integrates digital zooming, to extend the area of application, and a visual compass. Ambiguous information used for recognizing places is resolved with multiple hypothesis tracking and a selection procedure inspired by Markov localization. This enables the system to deal with perceptual aliasing or absence of reliable sensor data. It has been implemented on a robot operating in an office scenario and the robustness of the approach demonstrated experimentally.}
    }
  • O. Booij, B. Kröse, J. Peltason, T. P. Spexard, and M. Hanheide, “Moving from augmented to interactive mapping,” in Robotics: Science and Systems Workshop on Interactive Robot Learning, 2008.
    [BibTeX] [Abstract] [EPrints]

    Recently there has been a growing interest in human augmented mapping [1, 2]. That is: a mobile robot builds a low-level spatial representation of the environment based on its sensor readings, while a human provides labels for human concepts, such as rooms, which are then augmented or anchored to this representation or map [3]. Given such an augmented map, the robot has the ability to communicate with the human about spatial concepts using labels that the human understands. For instance, the robot could report that it is in the "kitchen", instead of a set of Cartesian coordinates which are probably meaningless to the human.

    @inproceedings{lirolem6933,
           booktitle = {Robotics: Science and Systems Workshop on Interactive Robot Learning},
              editor = {B. Gottfried and H. Aghajan},
               month = {June},
               title = {Moving from augmented to interactive mapping},
              author = {Olaf Booij and Ben Kr{\"o}se and Julia Peltason and Thorsten P. Spexard and Marc Hanheide},
                year = {2008},
                note = {Recently1 there has been a growing interest in human
    augmented mapping[1, 2]. That is: a mobile robot builds
    a low level spatial representation of the environment based
    on its sensor readings while a human provides labels for
    human concepts, such as rooms, which are then augmented
    or anchored to this representation or map [3]. Given such an
    augmented map the robot has the ability to communicate with
    the human about spatial concepts using the labels that the
    human understand. For instance, the robot could report it is
    in the ?kitchen?, instead of a set Cartesian coordinates which
    are probably meaningless to the human.},
                 url = {http://eprints.lincoln.ac.uk/6933/},
            abstract = {Recently1 there has been a growing interest in human
    augmented mapping[1, 2]. That is: a mobile robot builds
    a low level spatial representation of the environment based
    on its sensor readings while a human provides labels for
    human concepts, such as rooms, which are then augmented
    or anchored to this representation or map [3]. Given such an
    augmented map the robot has the ability to communicate with
    the human about spatial concepts using the labels that the
    human understand. For instance, the robot could report it is
    in the ?kitchen?, instead of a set Cartesian coordinates which
    are probably meaningless to the human.}
    }
  • F. Dayoub and T. Duckett, “An adaptive appearance-based map for long-term topological localization of mobile robots,” in International Conference on Intelligent Robots and Systems 2008, 2008, pp. 3364-3369.
    [BibTeX] [Abstract] [EPrints]

    This work considers a mobile service robot which uses an appearance-based representation of its workplace as a map, where the current view and the map are used to estimate the current position in the environment. Due to the nature of real-world environments such as houses and offices, where the appearance keeps changing, the internal representation may become out of date after some time. To solve this problem the robot needs to be able to adapt its internal representation continually to the changes in the environment. This paper presents a method for creating an adaptive map for long-term appearance-based localization of a mobile robot using long-term and short-term memory concepts, with omni-directional vision as the external sensor. (A brief illustrative sketch of such a memory scheme follows the BibTeX record below.)

    @inproceedings{lirolem1679,
           booktitle = {International Conference on Intelligent Robots and Systems 2008},
               month = {September},
               title = {An adaptive appearance-based map for long-term topological localization of mobile robots},
              author = {Feras Dayoub and Tom Duckett},
           publisher = {IEEE},
                year = {2008},
               pages = {3364--3369},
                 url = {http://eprints.lincoln.ac.uk/1679/},
            abstract = {This work considers a mobile service robot which uses an appearance-based representation of its workplace as a map, where the current view and the map are used to estimate the current position in the environment. Due to the nature of real-world environments such as houses and offices, where the appearance keeps changing, the internal representation may become out of date after some time. To solve this problem the robot needs to be able to adapt its internal representation continually to the changes in the environment. This paper presents a method for creating an adaptive map for long-term appearance-based localization of a mobile robot using long-term and short-term memory concepts, with omni-directional vision as the external sensor.}
    }
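
    The abstract above names short-term and long-term memory as the adaptation mechanism without detailing it. The sketch below shows one common reading of such a scheme (new map features enter short-term memory, repeatedly re-observed ones are promoted to long-term memory, and features unseen for a long time are forgotten); it is an assumption for illustration, not the paper's algorithm.

        def update_memories(stm, ltm, observed_ids, promote_after=3, forget_after=50):
            """One reference-view update of short-term (STM) and long-term memory (LTM).

            stm / ltm    : dicts mapping feature id -> {'hits': int, 'missed': int}
            observed_ids : set of feature ids matched in the current view
            """
            for fid in observed_ids:
                entry = ltm[fid] if fid in ltm else stm.setdefault(fid, {'hits': 0, 'missed': 0})
                entry['hits'] += 1
                entry['missed'] = 0
            # Promote STM features that keep re-appearing; they are considered stable.
            for fid in [f for f, e in stm.items() if e['hits'] >= promote_after]:
                ltm[fid] = stm.pop(fid)
            # Age everything that was not observed and forget features unseen for too long.
            for memory in (stm, ltm):
                for fid in list(memory):
                    if fid not in observed_ids:
                        memory[fid]['missed'] += 1
                        if memory[fid]['missed'] >= forget_after:
                            del memory[fid]
            return stm, ltm
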
  • M. Hanheide, S. Wrede, C. Lang, and G. Sagerer, “Who am I talking with? A face memory for social robots,” in IEEE International Conference on Robotics and Automation, 2008, pp. 3660-3665.
    [BibTeX] [Abstract] [EPrints]

    In order to provide personalized services and to develop human-like interaction capabilities, robots need to recognize their human partner. Face recognition has been studied exhaustively in the past decade in the context of security systems, with significant progress on huge datasets. However, these capabilities are not in focus when it comes to social interaction situations. Humans are able to remember people seen for a short moment in time and apply this knowledge directly in their engagement in conversation. In order to equip a robot with capabilities to recall human interlocutors and to provide user-aware services, we adopt human-human interaction schemes to propose a face memory on the basis of active appearance models integrated with the active memory architecture. This paper presents the concept of the interactive face memory, the applied recognition algorithms, and their embedding into the robot's system architecture. Performance measures are discussed for general face databases as well as scenario-specific datasets.

    @inproceedings{lirolem6938,
               month = {May},
              author = {Marc Hanheide and Sebastian Wrede and Christian Lang and Gerhard Sagerer},
                note = {In order to provide personalized services and to
    develop human-like interaction capabilities robots need to rec-
    ognize their human partner. Face recognition has been studied
    in the past decade exhaustively in the context of security systems
    and with significant progress on huge datasets. However, these
    capabilities are not in focus when it comes to social interaction
    situations. Humans are able to remember people seen for a
    short moment in time and apply this knowledge directly in
    their engagement in conversation. In order to equip a robot with
    capabilities to recall human interlocutors and to provide user-
    aware services, we adopt human-human interaction schemes to
    propose a face memory on the basis of active appearance models
    integrated with the active memory architecture. This paper
    presents the concept of the interactive face memory, the applied
    recognition algorithms, and their embedding into the robot?s
    system architecture. Performance measures are discussed for
    general face databases as well as scenario-specific datasets.},
           booktitle = {IEEE International Conference on Robotics and Automation},
              editor = {B. Gottfried and H. Aghajan},
               title = {Who am I talking with? A face memory for social robots},
           publisher = {IEEE},
                year = {2008},
               pages = {3660--3665},
                 url = {http://eprints.lincoln.ac.uk/6938/},
            abstract = {In order to provide personalized services and to
    develop human-like interaction capabilities robots need to rec-
    ognize their human partner. Face recognition has been studied
    in the past decade exhaustively in the context of security systems
    and with significant progress on huge datasets. However, these
    capabilities are not in focus when it comes to social interaction
    situations. Humans are able to remember people seen for a
    short moment in time and apply this knowledge directly in
    their engagement in conversation. In order to equip a robot with
    capabilities to recall human interlocutors and to provide user-
    aware services, we adopt human-human interaction schemes to
    propose a face memory on the basis of active appearance models
    integrated with the active memory architecture. This paper
    presents the concept of the interactive face memory, the applied
    recognition algorithms, and their embedding into the robot?s
    system architecture. Performance measures are discussed for
    general face databases as well as scenario-specific datasets.}
    }
  • M. Hanheide and G. Sagerer, “Active memory-based interaction strategies for learning-enabling behaviors,” in RO-MAN 2008 – The 17th IEEE International Symposium on Robot and Human Interactive Communication, 2008, pp. 101-106.
    [BibTeX] [Abstract] [EPrints]

    Despite increasing efforts in the field of social robotics and interactive systems, integrated and fully autonomous robots which are capable of learning from interaction with inexperienced and non-expert users are still a rarity. However, in order to tackle the challenge of learning by interaction, robots need to be equipped with a set of basic behaviors and abilities which have to be coupled and combined in a flexible manner. This paper presents how a recently proposed information-driven integration concept termed "active memory" is adopted to realize learning-enabling behaviors for a domestic robot. These behaviors enable it to (i) learn about its environment, (ii) interact with several humans simultaneously, and (iii) couple learning and interaction tightly. The basic interaction strategies on the basis of information exchange through the active memory are presented. A brief discussion of results obtained from live user trials with inexperienced users in a home tour scenario underpins the relevance and appropriateness of the described concepts.

    @inproceedings{lirolem6928,
               month = {August},
              author = {Marc Hanheide and Gerhard Sagerer},
                note = {Despite increasing efforts in the field of social
    robotics and interactive systems integrated and fully autonomous
    robots which are capable of learning from interaction
    with inexperienced and non-expert users are still a rarity.
    However, in order to tackle the challenge of learning by
    interaction robots need to be equipped with a set of basic
    behaviors and abilities which have to be coupled and combined
    in a flexible manner. This paper presents how a recently
    proposed information-driven integration concept termed ?active
    memory? is adopted to realize learning-enabling behaviors for
    a domestic robot. These behaviors enable it to (i) learn about its
    environment, (ii) interact with several humans simultaneously,
    and (iii) couple learning and interaction tightly. The basic
    interaction strategies on the basis of information exchange
    through the active memory are presented. A brief discussion
    of results obtained from live user trials with inexperienced
    users in a home tour scenario underpin the relevance and
    appropriateness of the described concepts.},
           booktitle = {RO-MAN 2008 - The 17th IEEE International Symposium on Robot and Human Interactive Communication},
              editor = {B. Gottfried and H. Aghajan},
               title = {Active memory-based interaction strategies for learning-enabling behaviors},
           publisher = {IEEE},
                year = {2008},
               pages = {101--106},
                 url = {http://eprints.lincoln.ac.uk/6928/},
            abstract = {Despite increasing efforts in the field of social
    robotics and interactive systems integrated and fully autonomous
    robots which are capable of learning from interaction
    with inexperienced and non-expert users are still a rarity.
    However, in order to tackle the challenge of learning by
    interaction robots need to be equipped with a set of basic
    behaviors and abilities which have to be coupled and combined
    in a flexible manner. This paper presents how a recently
    proposed information-driven integration concept termed ?active
    memory? is adopted to realize learning-enabling behaviors for
    a domestic robot. These behaviors enable it to (i) learn about its
    environment, (ii) interact with several humans simultaneously,
    and (iii) couple learning and interaction tightly. The basic
    interaction strategies on the basis of information exchange
    through the active memory are presented. A brief discussion
    of results obtained from live user trials with inexperienced
    users in a home tour scenario underpin the relevance and
    appropriateness of the described concepts.}
    }
  • L. Jun and T. Duckett, “Some practical aspects on incremental training of RBF network for robot behavior learning,” in 2008 7th World Congress on Intelligent Control and Automation, 2008, pp. 2001-2006.
    [BibTeX] [Abstract] [EPrints]

    The radial basis function (RBF) neural network with Gaussian activation function and least-mean-squares (LMS) learning algorithm is a popular function approximator widely used in many applications due to its simplicity, robustness, optimal approximation, etc. In practice, however, making the RBF network (and other neural networks) work well can sometimes be more of an art than a science, especially concerning parameter selection and adjustment. In this paper, we address three issues, namely the normalization of raw sensory-motor data, the choice of receptive fields for the RBFs, and the adjustment of the learning rate when training the RBF network in an incremental learning fashion for robot behavior learning, where the RBF network is used to map sensory inputs to motor outputs. Though these issues are less theoretical and scientific, they are more practical, and sometimes more crucial, for the application of the RBF network to the problems at hand. We believe that being aware of these practical issues can enable a better use of the RBF network in real-world applications. 1 Introduction: The radial basis function (RBF) network [3, 16] has found a wide range of applications due to its simplicity, local learning, robustness, optimal approximation, etc. For example, in an autonomous robot control system, the RBF network can be applied to directly map sensory inputs to motor outputs [23, 21, 9, 15] for acquiring the required behaviors. However, in these successful applications there has been much less description of how to choose and adjust the parameters, and why they are adjusted so, for the applications of interest. In this paper, we address three practical aspects of incremental training of the RBF network, namely normalizing the raw sensor input, choosing the receptive fields of the RBFs, and adjusting the learning rate for robot behavior learning. We restrict our investigation of these issues to the following situations. First of all, for simplicity of notation, consider a multi-input single-output (MISO) system in which $x = [x_1, x_2, \ldots, x_m]^T$ is an $m$-dimensional input vector and $y$ is the scalar output. The RBF neural network can be defined as $\hat{y} = F(x) = \sum_{k=1}^{K} w_k \phi_k(x) + b$, with $\phi_k(x) = e^{-\frac{1}{(\gamma \sigma_k)^2}\|x - \mu_k\|^2}$ for $k = 1, 2, \ldots, K$ (1), where $w_k$ is the weight of the $k$-th Gaussian function $\phi_k(x)$, $\mu_k = [\mu_{k1}, \mu_{k2}, \ldots, \mu_{km}]^T$ is the $m$-dimensional position vector of the $k$-th radial basis function, and $\sigma_k$ is the receptive field of the $k$-th radial basis function. In addition, $K$ is the number of RBFs, $b$ is the bias, and $\gamma$ is the optimal factor introduced for optimising the receptive field $\sigma_k$, as in [20]. We assume that the number of RBFs $K$ could either be designated in advance before training, in which case clustering algorithms like MacQueen's K-means or Kohonen's SOM [10] can be used to determine the position vectors $\mu_k$, or be obtained automatically in real time during the training process by using dynamically adaptive clustering algorithms such as GWR [14]. In both cases, the receptive field $\sigma_k$ can be determined by an empirical estimation method (see Section 3). We also assume that the RBF network's weights $w_k$ and bias $b$ are updated by the least-mean-squares (LMS) algorithm, as $w_k \leftarrow w_k + \eta_t (y_t - \hat{y}) \phi_k(x_t)$ for $k = 1, 2, \ldots, K$, and $b \leftarrow b + \eta_t (y_t - \hat{y})$ (2). (A brief illustrative sketch of this incremental update follows the BibTeX record below.)

    @inproceedings{lirolem12853,
           booktitle = {2008 7th World Congress on Intelligent Control and Automation},
               month = {June},
               title = {Some practical aspects on incremental training of RBF network for robot behavior learning},
              author = {Li  Jun and Tom Duckett},
           publisher = {IEEE},
                year = {2008},
               pages = {2001--2006},
                 url = {http://eprints.lincoln.ac.uk/12853/},
            abstract = {The radial basis function (RBF) neural network with Gaussian activation function and least- mean squares (LMS) learning algorithm is a popular function approximator widely used in many applications due to its simplicity, robustness, optimal approximation, etc.. In practice, however, making the RBF network (and other neural networks) work well can sometimes be more of an art than a science, especially concerning parameter selection and adjustment. In this paper, we address three issues, namely the normalization of raw sensory-motor data, the choice of receptive fields for the RBFs, and the adjustment of the learning rate when training the RBF network in incremental learning fashion for robot behavior learning, where the RBF network is used to map sensory inputs to motor outputs. Though these issues are less theoretical and scientific, they are more practical, and sometimes more crucial for the application of the RBF network to the problems at hand. We believe that being aware of these practical issues can enable a better use of the RBF network in the real-world application. 1Introduction The radial basis function (RBF) network [3, 16] has found a wide range of application due to its simplicity, local learning, robustness, optimal approximation, etc.. For example, in an autonomous robot control system, the RBF network can be applied to directly map the sensory inputs to motor outputs [23, 21, 9, 15] for acquiring the required behaviors. However, in these successful applications there has been much less description on how to choose and adjust the parameters and why they are adjusted so for the applications of interest. In this paper, we address three practical aspects for incremental training of the RBF network, namely normalizing the raw sensor input, choosing the receptive fields of RBFs, and adjusting the learning rate for robot behavior learning. We restrict our investigation of these issues to the following situations: First of all, for simplicity of notation, consider a multi-input and single-output (MISO) system in which x = [x1,x2,...,xm]Tis an m-dimensional input vector, and y the scalar output. The RBF neural network can be defined as: ? y = F(x) = K ? k=1 wk{\ensuremath{\phi}}k(x) + b,{\ensuremath{\phi}}k(x) = e? 1 ({\ensuremath{\gamma}}{\ensuremath{\sigma}}k)2?x??k?2 for k = 1,2,...,K, (1) where wkis the weight of k-th Gaussian function {\ensuremath{\phi}}k(x), ?k= [?k1,?k2,...,?km]Tis the m-dimensional position vector of k-th radial basis function, and {\ensuremath{\sigma}}kis receptive field of k-th radial basis function. In addition, K is the number of the RBFs, b is the bias, and {\ensuremath{\gamma}} is the optimal factor introduced for optimising the receptive field {\ensuremath{\sigma}}k, as in [20]. We assume that the number of RBFs K could either be designated in advance before training, thus clustering algorithms like McQueen?s K-means, or Kohonen?s SOM [10] can be used for determining the position vector ?k; or it could be automatically obtained in real time during the training process by using dynamically adaptive clustering algorithms such as GWR [14]. In both cases, the receptive field {\ensuremath{\sigma}}kcan be determined by some empirical estimation method (see section 3). We also assume that the RBF network?s weights wkand bias b are updated by the least mean squares (LMS) algorithm, as wk? wk+ {\ensuremath{\eta}}t(yt? ? y){\ensuremath{\phi}}k(xt), for k = 1,2,...,K,b ? b + {\ensuremath{\eta}}t(yp? ? y), (2) 1}
    }
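
    The definitions in Eqs. (1) and (2) above translate almost directly into code. The following is a minimal sketch, assuming the RBF centres and widths have already been chosen (e.g. by a clustering step); the array shapes and the learning-rate value are illustrative, not taken from the paper.

        import numpy as np

        class IncrementalRBF:
            """RBF network y_hat = sum_k w_k * phi_k(x) + b with incremental LMS updates."""

            def __init__(self, centres, widths, gamma=1.0, learning_rate=0.05):
                self.centres = np.asarray(centres, dtype=float)   # (K, m) positions mu_k
                self.widths = np.asarray(widths, dtype=float)     # (K,) receptive fields sigma_k
                self.gamma = gamma                                 # width optimisation factor gamma
                self.eta = learning_rate                           # LMS learning rate eta
                self.w = np.zeros(len(self.centres))               # weights w_k
                self.b = 0.0                                       # bias b

            def _phi(self, x):
                d2 = np.sum((self.centres - x) ** 2, axis=1)       # ||x - mu_k||^2
                return np.exp(-d2 / (self.gamma * self.widths) ** 2)

            def predict(self, x):
                return float(self.w @ self._phi(x) + self.b)

            def update(self, x, y_target):
                """One incremental LMS step on a single (input, target) pair, as in Eq. (2)."""
                phi = self._phi(x)
                error = y_target - (self.w @ phi + self.b)
                self.w += self.eta * error * phi
                self.b += self.eta * error
                return error
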
  • K. Jüngling, M. Arens, M. Hanheide, and G. Sagerer, “Fusion of perceptual processes for real-time object tracking,” in 11th International Conference on Information Fusion, 2008, pp. 1-8.
    [BibTeX] [Abstract] [EPrints]

    This paper introduces a generic architecture for the fusion of perceptual processes and its application in real-time object tracking. In this architecture, the well-known anchoring approach is extended, by integrating techniques from information fusion, to multi-modal anchoring so as to be applicable in a multi-process environment. The system architecture is designed to be applicable in a generic way, independent of specific application domains and of the characteristics of the underlying sensory processes. It is shown that, by combining multiple independent video-based detection methods, the generic multi-modal anchoring approach can be successfully employed for real-time person tracking in difficult environments.

    @inproceedings{lirolem6932,
               month = {April},
              author = {Kai J{\"u}ngling and Michael Arens and Marc Hanheide and Gerhard Sagerer},
                note = {This paper introduces a generic architecture for the fusion of perceptual processes and its application in real-time object tracking. In this architecture, the well known anchoring approach is, by integrating techniques from information fusion, extended to multi-modal anchoring so as to be applicable in a multi-process environment. The system architecture is designed to be applicable in a generic way, independent of specific application domains and of the characteristics of the underlying sensory processes. It is shown that, by combining multiple independent video-based detection methods, the generic multi-modal anchoring approach can be successfully employed for real-time person tracking in difficult environments},
           booktitle = {11th International Conference on Information Fusion},
              editor = {B. Gottfried and H. Aghajan},
               title = {Fusion of perceptual processes for real-time object tracking},
           publisher = {IEEE},
                year = {2008},
               pages = {1--8},
                 url = {http://eprints.lincoln.ac.uk/6932/},
            abstract = {This paper introduces a generic architecture for the fusion of perceptual processes and its application in real-time object tracking. In this architecture, the well known anchoring approach is, by integrating techniques from information fusion, extended to multi-modal anchoring so as to be applicable in a multi-process environment. The system architecture is designed to be applicable in a generic way, independent of specific application domains and of the characteristics of the underlying sensory processes. It is shown that, by combining multiple independent video-based detection methods, the generic multi-modal anchoring approach can be successfully employed for real-time person tracking in difficult environments}
    }
  • M. Lohse and M. Hanheide, “Evaluating a social home tour robot applying heuristics,” in Robots as Social Actors Workshop: International Symposium on Robot and Human Interactive Communication (RO-MAN 08), 2008, pp. 1584-1589.
    [BibTeX] [Abstract] [EPrints]

    In a society that is getting ever closer in touch with social robots, it is very important to include potential users throughout the design of the systems. This is an important rationale for building robots that provide services and assistance in a socially acceptable way and influence societies in a positive way. In the process, methods are needed to rate the robot's interaction performance. We present a multimodal corpus of naïve users interacting with an autonomously operating system. It comprises data that, we are convinced, reveal a lot about human-robot interaction (HRI) in general and social acceptance in particular. In both the evaluation and the design process we took into account Clarkson and Arkin's heuristics for HRI (developed by adapting Nielsen's and Scholtz' heuristics to robotics) [1]. We discuss exemplary results to show the use of heuristics in the design of socially acceptable robots.

    @inproceedings{lirolem6930,
               month = {August},
              author = {Manja Lohse and Marc Hanheide},
                note = {In a society that keeps getting closer in touch with social robots it is very important to include potential users throughout the design of the systems. This is an important rationale to build robots that provide services and assistance in a socially acceptable way and influence societies in a positive way. In the process, methods are needed to rate the robot interaction performance. We present a multimodal corpus of na{\"i}ve users interacting with an autonomously operating system. It comprises data that, to our conviction, reveal a lot about human-robot interaction (HRI) in general and social acceptance, in particular. In both, the evaluation and the design process we took into account Clarkson and Arkin's heuristics for HRI (developed by adapting Nielsen's and Scholtz' heuristics to robotics) 1. We discuss exemplary results to show the use of heuristics in the design of socially acceptable robots.},
           booktitle = {Robots as Social Actors Workshop: International Symposium on Robot and Human Interactive Communication (RO-MAN 08)},
              editor = {B. Gottfried and H. Aghajan},
               title = {Evaluating a social home tour robot applying heuristics},
               pages = {1584--1589},
                year = {2008},
                 url = {http://eprints.lincoln.ac.uk/6930/},
            abstract = {In a society that keeps getting closer in touch with social robots it is very important to include potential users throughout the design of the systems. This is an important rationale to build robots that provide services and assistance in a socially acceptable way and influence societies in a positive way. In the process, methods are needed to rate the robot interaction performance. We present a multimodal corpus of na{\"i}ve users interacting with an autonomously operating system. It comprises data that, to our conviction, reveal a lot about human-robot interaction (HRI) in general and social acceptance, in particular. In both, the evaluation and the design process we took into account Clarkson and Arkin's heuristics for HRI (developed by adapting Nielsen's and Scholtz' heuristics to robotics) 1. We discuss exemplary results to show the use of heuristics in the design of socially acceptable robots.}
    }
  • M. Lohse, M. Hanheide, B. Wrede, M. L. Walters, K. L. Koay, D. S. Syrdal, A. Green, H. Huttenrauch, K. Dautenhahn, G. Sagerer, and K. Severinson-Eklundh, “Evaluating extrovert and introvert behaviour of a domestic robot — a video study,” in The 17th IEEE International Symposium on Robot and Human Interactive Communication, 2008, pp. 488-493.
    [BibTeX] [Abstract] [EPrints]

    This paper presents human-robot interaction (HRI) research into social robots that have to be able to interact with inexperienced users. In the design of these robots many research findings from human-human interaction and human-computer interaction are adopted, but the direct applicability of these theories is limited because a robot is different from both humans and computers. Therefore, new methods have to be developed in HRI in order to build robots that are suitable for inexperienced users. In this paper we present a video study we conducted employing our robot BIRON (Bielefeld robot companion), which is designed for use in domestic environments. Subjects watched the system during the interaction with a human and rated two different robot behaviours (extrovert and introvert). The behaviours differed regarding verbal output and person following of the robot. Aiming to improve human-robot interaction, participants' ratings of the behaviours were evaluated and compared.

    @inproceedings{lirolem6931,
               month = {August},
              author = {Manja Lohse and Marc Hanheide and Britta Wrede and Michael L. Walters and Kheng Lee Koay and Dag Sverre Syrdal and Anders Green and Helge Huttenrauch and Kerstin Dautenhahn and Gerhard Sagerer and Kerstin Severinson-Eklundh},
                note = {Human-robot interaction (HRI) research is here presented into social robots that have to be able to interact with inexperienced users. In the design of these robots many research findings of human-human interaction and human-computer interaction are adopted but the direct applicability of these theories is limited because a robot is different from both humans and computers. Therefore, new methods have to be developed in HRI in order to build robots that are suitable for inexperienced users. In this paper we present a video study we conducted employing our robot BIRON (Bielefeld robot companion) which is designed for use in domestic environments. Subjects watched the system during the interaction with a human and rated two different robot behaviours (extrovert and introvert). The behaviours differed regarding verbal output and person following of the robot. Aiming to improve human-robot interaction, participantspsila ratings of the behaviours were evaluated and compared.},
           booktitle = {The 17th IEEE International Symposium on Robot and Human Interactive Communication},
              editor = {B. Gottfried and H. Aghajan},
               title = {Evaluating extrovert and introvert behaviour of a domestic robot {--} a video study},
           publisher = {IEEE},
                year = {2008},
               pages = {488--493},
                 url = {http://eprints.lincoln.ac.uk/6931/},
            abstract = {Human-robot interaction (HRI) research is here presented into social robots that have to be able to interact with inexperienced users. In the design of these robots many research findings of human-human interaction and human-computer interaction are adopted but the direct applicability of these theories is limited because a robot is different from both humans and computers. Therefore, new methods have to be developed in HRI in order to build robots that are suitable for inexperienced users. In this paper we present a video study we conducted employing our robot BIRON (Bielefeld robot companion) which is designed for use in domestic environments. Subjects watched the system during the interaction with a human and rated two different robot behaviours (extrovert and introvert). The behaviours differed regarding verbal output and person following of the robot. Aiming to improve human-robot interaction, participantspsila ratings of the behaviours were evaluated and compared.}
    }
  • T. Martinez-Marin and T. Duckett, “Learning visual docking for non-holonomic autonomous vehicles,” in The Intelligent Vehicles 2008 Symposium (IV08), 2008.
    [BibTeX] [Abstract] [EPrints]

    This paper presents a new method of learning visual docking skills for non-holonomic vehicles by direct interaction with the environment. The method is based on a reinforcement learning algorithm, which speeds up Q-learning by applying memory-based sweeping and enforcing the "adjoining property", a filtering mechanism that only allows transitions between states that satisfy a fixed distance constraint. The method overcomes some limitations of reinforcement learning techniques when they are employed in applications with continuous non-linear systems, such as car-like vehicles. In particular, a good approximation to the optimal behaviour is obtained with a small look-up table. The algorithm is tested within an image-based visual servoing framework on a docking task. The training time was less than 1 hour on the real vehicle. In experiments, we show the satisfactory performance of the algorithm. (A brief illustrative sketch of the learning loop follows the BibTeX record below.)

    @inproceedings{lirolem1683,
           booktitle = {The Intelligent Vehicles 2008 Symposium  (IV08)},
               month = {June},
               title = {Learning visual docking for non-holonomic autonomous vehicles},
              author = {Tomas Martinez-Marin and Tom Duckett},
                year = {2008},
                note = {This paper presents a new method of learning visual docking skills for non-holonomic vehicles by direct interaction with the environment. The method is based on a reinforcement algorithm, which speeds up Q-learning by applying memorybased sweeping and enforcing the ?adjoining property?, a filtering mechanism to only allow transitions between states that satisfy a fixed distance. The method overcomes some limitations of reinforcement learning techniques when they are employed in applications with continuous non-linear systems, such as car-like vehicles. In particular, a good approximation to the optimal
    behaviour is obtained by a small look-up table. The algorithm is tested within an image-based visual servoing framework on a docking task. The training time was less than 1 hour on the real vehicle. In experiments, we show the satisfactory performance of the algorithm.},
                 url = {http://eprints.lincoln.ac.uk/1683/},
            abstract = {This paper presents a new method of learning visual docking skills for non-holonomic vehicles by direct interaction with the environment. The method is based on a reinforcement algorithm, which speeds up Q-learning by applying memorybased sweeping and enforcing the ?adjoining property?, a filtering mechanism to only allow transitions between states that satisfy a fixed distance. The method overcomes some limitations of reinforcement learning techniques when they are employed in applications with continuous non-linear systems, such as car-like vehicles. In particular, a good approximation to the optimal
    behaviour is obtained by a small look-up table. The algorithm is tested within an image-based visual servoing framework on a docking task. The training time was less than 1 hour on the real vehicle. In experiments, we show the satisfactory performance of the algorithm.}
    }
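
    A minimal sketch of tabular Q-learning accelerated by replaying stored transitions (a simple stand-in for memory-based sweeping) and filtered by a fixed-distance test standing in for the "adjoining property". The environment interface, distance test and hyper-parameters are assumptions for illustration, not the paper's.

        import random
        from collections import defaultdict

        def q_learning_with_sweeping(env, episodes=200, alpha=0.3, gamma=0.95,
                                     epsilon=0.1, sweep_steps=20, max_dist=1.0):
            """Tabular Q-learning sped up by replaying stored transitions.

            'env' is assumed to expose: reset() -> state, step(state, action) ->
            (next_state, reward, done), actions(state) -> list, and
            distance(s1, s2) -> float. Transitions whose end points are farther
            apart than 'max_dist' are discarded (the adjoining-property filter).
            """
            Q = defaultdict(float)
            memory = []                                            # stored (s, a, r, s') tuples

            def greedy(s):
                return max(env.actions(s), key=lambda a: Q[(s, a)])

            def backup(s, a, r, s_next):
                best_next = max(Q[(s_next, b)] for b in env.actions(s_next))
                Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

            for _ in range(episodes):
                s, done = env.reset(), False
                while not done:
                    a = random.choice(env.actions(s)) if random.random() < epsilon else greedy(s)
                    s_next, r, done = env.step(s, a)
                    if env.distance(s, s_next) <= max_dist:        # adjoining-property filter
                        memory.append((s, a, r, s_next))
                        backup(s, a, r, s_next)
                    s = s_next
                    # Sweeping: replay a few stored transitions to propagate values faster.
                    for (ms, ma, mr, msn) in random.sample(memory, min(sweep_steps, len(memory))):
                        backup(ms, ma, mr, msn)
            return Q
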
  • M. Persson, T. Duckett, and A. Lilienthal, “Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping,” Robotics and Autonomous Systems, vol. 56, iss. 6, pp. 483-492, 2008.
    [BibTeX] [Abstract] [EPrints]

    This work investigates the use of semantic information to link ground level occupancy maps and aerial images. A ground level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map where the semantic information has been extended beyond the range of the robot sensors and predicts where the mobile robot can find buildings and potentially driveable ground.

    @article{lirolem1682,
              volume = {56},
              number = {6},
               month = {June},
              author = {Martin Persson and Tom Duckett and Achim Lilienthal},
                note = {This work investigates the use of semantic information to link ground level occupancy maps and aerial images. A ground level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map where the semantic information has been extended beyond the range of the robot sensors and predicts where the mobile robot can find buildings and potentially driveable ground.},
               title = {Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping},
           publisher = {Elsevier},
                year = {2008},
             journal = {Robotics and autonomous systems},
               pages = {483--492},
                 url = {http://eprints.lincoln.ac.uk/1682/},
            abstract = {This work investigates the use of semantic information to link ground level occupancy maps and aerial images. A ground level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map where the semantic information has been extended beyond the range of the robot sensors and predicts where the mobile robot can find buildings and potentially driveable ground.}
    }
  • A. Rabie, C. Lang, M. Hanheide, M. Castrillon-Santana, and G. Sagerer, “Automatic initialization for facial analysis in interactive robotics,” in 6th International Conference, ICVS 2008, 2008, pp. 517-526.
    [BibTeX] [Abstract] [EPrints]

    The human face plays an important role in communication as it allows us to discern different interaction partners and provides non-verbal feedback. In this paper, we present a soft real-time vision system that enables an interactive robot to analyze the faces of interaction partners, not only to identify them but also to recognize their respective facial expressions as a dialog-controlling non-verbal cue. In order to ensure applicability in real-world environments, a robust detection scheme is presented which detects faces and basic facial features such as the position of the mouth, nose, and eyes. Based on these detected features, facial parameters are extracted using active appearance models (AAMs) and conveyed to support vector machine (SVM) classifiers to identify both persons and facial expressions. This paper focuses on four different initialization methods for determining the initial shape for the AAM algorithm and their particular performance in two different classification tasks, with respect to both the facial expression DaFEx database and real-world data obtained from a robot's point of view.

    @inproceedings{lirolem6929,
               month = {May},
              author = {Ahmad Rabie and Christian Lang and Marc Hanheide and Modesto Castrillon-Santana and Gerhard Sagerer},
                note = {The human face plays an important role in communication as it allows to discern different interaction partners and provides non-verbal feedback. In this paper, we present a soft real-time vision system that enables an interactive robot to analyze faces of interaction partners not only to identify them, but also to recognize their respective facial expressions as a dialog-controlling non-verbal cue. In order to assure applicability in real world environments, a robust detection scheme is presented which detects faces and basic facial features such as the position of the mouth, nose, and eyes. Based on these detected features, facial parameters are extracted using active appearance models (AAMs) and conveyed to support vector machine (SVM) classifiers to identify both persons and facial expressions. This paper focuses on four different initialization methods for determining the initial shape for the AAM algorithm and their particular performance in two different classification tasks with respect to either the facial expression DaFEx database and to the real world data obtained from a robot?s point of view.},
           booktitle = {6th International Conference, ICVS 2008},
              editor = {B. Gottfried and H. Aghajan},
               title = {Automatic initialization for facial analysis in interactive robotics},
           publisher = {Springer},
                year = {2008},
               pages = {517--526},
                 url = {http://eprints.lincoln.ac.uk/6929/},
            abstract = {The human face plays an important role in communication as it allows to discern different interaction partners and provides non-verbal feedback. In this paper, we present a soft real-time vision system that enables an interactive robot to analyze faces of interaction partners not only to identify them, but also to recognize their respective facial expressions as a dialog-controlling non-verbal cue. In order to assure applicability in real world environments, a robust detection scheme is presented which detects faces and basic facial features such as the position of the mouth, nose, and eyes. Based on these detected features, facial parameters are extracted using active appearance models (AAMs) and conveyed to support vector machine (SVM) classifiers to identify both persons and facial expressions. This paper focuses on four different initialization methods for determining the initial shape for the AAM algorithm and their particular performance in two different classification tasks with respect to either the facial expression DaFEx database and to the real world data obtained from a robot?s point of view.}
    }
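
    The classification stage described in the entry above can be illustrated with a minimal, hypothetical sketch: AAM parameter vectors (assumed to be already extracted from detected faces) are fed to two independent SVM classifiers, one for person identity and one for facial expression. All data, dimensions and class counts below are made up for illustration and are not taken from the paper.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_train, n_params = 200, 30                      # hypothetical number of AAM parameters
    X_train = rng.normal(size=(n_train, n_params))   # stand-in AAM parameter vectors
    y_identity = rng.integers(0, 5, size=n_train)    # 5 hypothetical persons
    y_expression = rng.integers(0, 6, size=n_train)  # 6 hypothetical expressions

    # One SVM per task, both operating on the same facial parameters.
    identity_svm = SVC(kernel="rbf", C=1.0).fit(X_train, y_identity)
    expression_svm = SVC(kernel="rbf", C=1.0).fit(X_train, y_expression)

    x_new = rng.normal(size=(1, n_params))           # parameters of a newly fitted face
    print("person id:", identity_svm.predict(x_new)[0])
    print("expression:", expression_svm.predict(x_new)[0])
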
  • T. P. Spexard, M. Hanheide, S. Li, and B. Wrede, “Oops, something is wrong – error detection and recovery for advanced human-robot-interaction,” in ICRA Workshop on Social Interaction with Intelligent Indoor Robots (2008), 2008.
    [BibTeX] [Abstract] [EPrints]

    A matter of course for the researchers and developers of state-of-the-art technology for human-computer- or human-robot-interaction is to create not only systems that can precisely fulfill a certain task. They must provide a strong robustness against internal and external errors or user-dependent application errors. Especially when creating service robots for a variety of applications or robots for accompanying humans in everyday situations sufficient error robustness is crucial for acceptance by users. But experience unveils that operating such systems under real world conditions with unexperienced users is an extremely challenging task which still is not solved satisfactorily. In this paper we will present an approach for handling both internal errors and application errors within an integrated system capable of performing extended HRI on different robotic platforms and in unspecified surroundings like a real world apartment. Based on the gathered experience from user studies and evaluating integrated systems in the real world, we implemented several ways to generalize and handle unexpected situations. Adding such a kind of error awareness to HRI systems in cooperation with the interaction partner avoids to get stuck in an unexpected situation or state and handle mode confusion. Instead of shouldering the enormous effort to account for all possible problems, this paper proposes a more general solution and underpins this with findings from naive user studies. This enhancement is crucial for the development of a new generation of robots as despite diligent preparations might be made, no one can predict how an interaction with a robotic system will develop and which kind of environment it has to cope with.

    @inproceedings{lirolem6935,
           booktitle = {ICRA Workshop on Social Interaction with Intelligent Indoor Robots (2008)},
              editor = {B. Gottfried and H. Aghajan},
               month = {May},
               title = {Oops, something is wrong - error detection and recovery for advanced human-robot-interaction},
              author = {Thorsten P. Spexard and Marc Hanheide and Shuyin Li and Britta Wrede},
                year = {2008},
                note = {A matter of course for the researchers and developers of state-of-the-art technology for human-computer- or human-robot-interaction is to create not only systems that can precisely fulfill a certain task. They must provide a strong robustness against internal and external errors or user-dependent application errors. Especially when creating service robots for a variety of applications or robots for accompanying humans in everyday situations sufficient error robustness is crucial for acceptance by users. But experience unveils that operating such systems under real world conditions with unexperienced users is an extremely challenging task which still is not solved satisfactorily. In this paper we will present an approach for handling both internal errors and application errors within an integrated system capable of performing extended HRI on different robotic platforms and in unspecified surroundings like a real world apartment. Based on the gathered experience from user studies and evaluating integrated systems in the real world, we implemented several ways to generalize and handle unexpected situations. Adding such a kind of error awareness to HRI systems in cooperation with the interaction partner avoids to get stuck in an unexpected situation or state and handle mode confusion. Instead of shouldering the enormous effort to account for all possible problems, this paper proposes a more general solution and underpins this with findings from naive user studies. This enhancement is crucial for the development of a new generation of robots as despite diligent preparations might be made, no one can predict how an interaction with a robotic system will develop and which kind of environment it has to cope with.},
            keywords = {ARRAY(0x7fdc78176ae8)},
                 url = {http://eprints.lincoln.ac.uk/6935/},
            abstract = {A matter of course for the researchers and developers of state-of-the-art technology for human-computer- or human-robot-interaction is to create not only systems that can precisely fulfill a certain task. They must provide a strong robustness against internal and external errors or user-dependent application errors. Especially when creating service robots for a variety of applications or robots for accompanying humans in everyday situations sufficient error robustness is crucial for acceptance by users. But experience unveils that operating such systems under real world conditions with unexperienced users is an extremely challenging task which still is not solved satisfactorily. In this paper we will present an approach for handling both internal errors and application errors within an integrated system capable of performing extended HRI on different robotic platforms and in unspecified surroundings like a real world apartment. Based on the gathered experience from user studies and evaluating integrated systems in the real world, we implemented several ways to generalize and handle unexpected situations. Adding such a kind of error awareness to HRI systems in cooperation with the interaction partner avoids to get stuck in an unexpected situation or state and handle mode confusion. Instead of shouldering the enormous effort to account for all possible problems, this paper proposes a more general solution and underpins this with findings from naive user studies. This enhancement is crucial for the development of a new generation of robots as despite diligent preparations might be made, no one can predict how an interaction with a robotic system will develop and which kind of environment it has to cope with.}
    }
  • A. Swadzba, A. Vollmer, M. Hanheide, and S. Wachsmuth, “Reducing noise and redundancy in registered range data for planar surface extraction,” in 19th International Conference on Pattern Recognition, 2008.
    [BibTeX] [Abstract] [EPrints]

    This paper presents a new method for detecting and merging redundant points in registered range data. Given a global representation from sequences of 3D points, the points are projected onto a virtual image plane computed from the intrinsic parameters of the sensor. Candidates for redundancy are collected per pixel which then are clustered locally via region growing and replaced by the cluster's mean value. As data is provided in a certain manner defined by camera characteristics, this processing step preserves the structural information of the data. For evaluation, our approach is compared to two other algorithms. Applied to two different sequences, it is shown that the presented method gives smooth results within planar regions of the point clouds by successfully reducing noise and redundancy and thus improves registered range data.

    @inproceedings{lirolem6936,
               month = {December},
              author = {Agnes Swadzba and Anna-Lisa Vollmer and Marc Hanheide and Sven Wachsmuth},
                 note = {This paper presents a new method for detecting and merging redundant points in registered range data. Given a global representation from sequences of 3D points, the points are projected onto a virtual image plane computed from the intrinsic parameters of the sensor. Candidates for redundancy are collected per pixel which then are clustered locally via region growing and replaced by the cluster's mean value. As data is provided in a certain manner defined by camera characteristics, this processing step preserves the structural information of the data. For evaluation, our approach is compared to two other algorithms. Applied to two different sequences, it is shown that the presented method gives smooth results within planar regions of the point clouds by successfully reducing noise and redundancy and thus improves registered range data.},
           booktitle = {19th International Conference on Pattern Recognition},
              editor = {B. Gottfried and H. Aghajan},
               title = {Reducing noise and redundancy in registered range data for planar surface extraction},
           publisher = {IEEE / The International Association for Pattern Recognition (IAPR)},
                year = {2008},
            keywords = {ARRAY(0x7fdc78150dd0)},
                 url = {http://eprints.lincoln.ac.uk/6936/},
             abstract = {This paper presents a new method for detecting and merging redundant points in registered range data. Given a global representation from sequences of 3D points, the points are projected onto a virtual image plane computed from the intrinsic parameters of the sensor. Candidates for redundancy are collected per pixel which then are clustered locally via region growing and replaced by the cluster's mean value. As data is provided in a certain manner defined by camera characteristics, this processing step preserves the structural information of the data. For evaluation, our approach is compared to two other algorithms. Applied to two different sequences, it is shown that the presented method gives smooth results within planar regions of the point clouds by successfully reducing noise and redundancy and thus improves registered range data.}
    }
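
    A rough sketch of the projection-and-merging step summarised in the entry above (not the authors' implementation): registered 3D points are projected onto a virtual image plane using assumed intrinsic parameters, candidates landing on the same pixel are grouped with a simple depth threshold, and each group is replaced by its mean. Intrinsics, threshold and data are illustrative assumptions.

    import numpy as np
    from collections import defaultdict

    def reduce_redundancy(points, fx=500.0, fy=500.0, cx=320.0, cy=240.0, depth_gap=0.05):
        """points: (N, 3) array in the camera frame, z > 0."""
        buckets = defaultdict(list)
        for p in points:
            x, y, z = p
            if z <= 0:
                continue
            u = int(round(fx * x / z + cx))      # pixel column on the virtual plane
            v = int(round(fy * y / z + cy))      # pixel row on the virtual plane
            buckets[(u, v)].append(p)

        merged = []
        for pts in buckets.values():
            pts = sorted(pts, key=lambda q: q[2])       # order candidates by depth
            cluster = [pts[0]]
            for q in pts[1:]:
                if q[2] - cluster[-1][2] < depth_gap:   # same local surface patch
                    cluster.append(q)
                else:                                   # depth jump: start a new cluster
                    merged.append(np.mean(cluster, axis=0))
                    cluster = [q]
            merged.append(np.mean(cluster, axis=0))
        return np.asarray(merged)

    rng = np.random.default_rng(1)
    pts = rng.normal([0.0, 0.0, 2.0], [0.3, 0.3, 0.01], size=(1000, 3))
    print(len(pts), "->", len(reduce_redundancy(pts)), "points after merging")
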
  • J. Wessnitzer, T. Haferlach, M. Mangan, and B. Webb, “Path integration using a model of e-vector orientation coding in the insect brain: reply to Vickerstaff and Di Paolo,” Adaptive Behavior, vol. 16, iss. 4, pp. 277-281, 2008.
    [BibTeX] [Abstract] [EPrints]

    In their response to our article (Haferlach, Wessnitzer, Mangan, & Webb, 2007), Vickerstaff and Di Paolo correctly note that the response function of input units used to evolve our network was the same cos(ha - hp) as that used by Vickerstaff and Di Paolo (2005), and resembles the POL neuron arrangement in insects (Labhart & Meyer, 2002) only in the use of three instead of two such units. However it is important to note that our evolved network structure–which maintains a population encoding of the home vector over a set of memory neurons that integrate the input coming from the direction cells–is in fact generalizable to a wide range of direction cell response functions. To demonstrate this point, we here show the results of integrating this network with a very recent model of e-Vector orientation coding in the central complex of the insect brain (Sakura, Lambrinos, & Labhart, 2008)

    @article{lirolem23578,
              volume = {16},
              number = {4},
               month = {August},
              author = {Jan Wessnitzer and Thomas Haferlach and Michael Mangan and Barbara Webb},
               title = {Path integration using a model of e-vector orientation coding in the insect brain: reply to Vickerstaff and Di Paolo},
           publisher = {Sage Publications, Inc.},
                year = {2008},
             journal = {Adaptive Behavior},
               pages = {277--281},
            keywords = {ARRAY(0x7fdc78151010)},
                 url = {http://eprints.lincoln.ac.uk/23578/},
             abstract = {In their response to our article (Haferlach, Wessnitzer, Mangan, \& Webb, 2007), Vickerstaff and Di Paolo correctly note that the response function of input units used to evolve our network was the same cos(ha - hp) as that used by Vickerstaff and Di Paolo (2005), and resembles the POL neuron arrangement in insects (Labhart \& Meyer, 2002) only in the use of three instead of two such units. However it is important to note that our evolved network structure{--}which maintains a population encoding of the home vector over a set of memory neurons that integrate the input coming from the direction cells{--}is in fact generalizable to a wide range of direction cell response functions. To demonstrate this point, we here show the results of integrating this network with a very recent model of e-Vector orientation coding in the central complex of the insect brain (Sakura, Lambrinos, \& Labhart, 2008)}
    }
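
    The mechanism discussed in the entry above can be illustrated with a small, hypothetical sketch: direction cells respond with the cosine of the difference between the current heading and their preferred direction, and a circular array of memory cells accumulates this input, scaled by the distance travelled, so that the home vector is held as a population code. Cell counts and the decoding step are illustrative choices, not the evolved network of the paper.

    import numpy as np

    preferred = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)  # preferred directions of the cells
    memory = np.zeros_like(preferred)                             # population-coded outbound vector

    def step(heading, distance):
        """Integrate one movement step into the memory cells."""
        global memory
        memory += distance * np.cos(heading - preferred)          # cosine-tuned direction cells

    def decode():
        """Read the accumulated outbound vector back out of the population code."""
        x = 2.0 / preferred.size * np.dot(memory, np.cos(preferred))
        y = 2.0 / preferred.size * np.dot(memory, np.sin(preferred))
        return np.hypot(x, y), np.arctan2(y, x)

    # Example: walk one unit east, then one unit north; the decoded outbound
    # vector points north-east, and the home direction is its opposite.
    step(0.0, 1.0)
    step(np.pi / 2, 1.0)
    length, direction = decode()
    print(f"outbound vector: length {length:.2f}, direction {np.degrees(direction):.1f} deg")
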
  • J. Wessnitzer, M. Mangan, and B. Webb, “Place memory in crickets,” Proceedings of the Royal Society B: Biological Sciences, vol. 275, iss. 1637, pp. 915-921, 2008.
    [BibTeX] [Abstract] [EPrints]

    Certain insect species are known to relocate nest or food sites using landmarks, but the generality of this capability among insects, and whether insect place memory can be used in novel task settings, is not known. We tested the ability of crickets to use surrounding visual cues to relocate an invisible target in an analogue of the Morris water maze, a standard paradigm for spatial memory tests on rodents. Adult female Gryllus bimaculatus were released into an arena with a floor heated to an aversive temperature, with one hidden cool spot. Over 10 trials, the time taken to find the cool spot decreased significantly. The best performance was obtained when a natural scene was provided on the arena walls. Animals can relocate the position from novel starting points. When the scene is rotated, they preferentially approach the fictive target position corresponding to the rotation. We note that this navigational capability does not necessarily imply the animal has an internal spatial representation.

    @article{lirolem23573,
              volume = {275},
              number = {1637},
               month = {April},
              author = {Jan Wessnitzer and Michael Mangan and Barbara Webb},
               title = {Place memory in crickets},
           publisher = {Royal Society},
                year = {2008},
             journal = {Proceedings of the Royal Society B: Biological Sciences},
               pages = {915--921},
            keywords = {ARRAY(0x7fdc7804c7a0)},
                 url = {http://eprints.lincoln.ac.uk/23573/},
            abstract = {Certain insect species are known to relocate nest or food sites using landmarks, but the generality of this capability among insects, and whether insect place memory can be used in novel task settings, is not known. We tested the ability of crickets to use surrounding visual cues to relocate an invisible target in an analogue of the Morris water maze, a standard paradigm for spatial memory tests on rodents. Adult female Gryllus bimaculatus were released into an arena with a floor heated to an aversive temperature, with one hidden cool spot. Over 10 trials, the time taken to find the cool spot decreased significantly. The best performance was obtained when a natural scene was provided on the arena walls. Animals can relocate the position from novel starting points. When the scene is rotated, they preferentially approach the fictive target position corresponding to the rotation. We note that this navigational capability does not necessarily imply the animal has an internal spatial representation.}
    }
  • F. Yuan, M. Hanheide, and G. Sagerer, “Spatial context-aware person-following for a domestic robot,” in International Workshop on Cognition for Technical Systems, 2008.
    [BibTeX] [Abstract] [EPrints]

    Domestic robots are in the focus of research in terms of service providers in households and even as robotic companion that share the living space with humans. A major capability of mobile domestic robots is joint exploration of space. One challenge to deal with this task is how could we let the robots move in space in reasonable, socially acceptable ways so that it will support interaction and communication as a part of the joint exploration. As a step towards this challenge, we have developed a context-aware following behavior considering these social aspects and applied these together with a multi-modal person-tracking method to switch between three basic following approaches, namely direction-following, path-following and parallel-following. These are derived from the observation of human-human following schemes and are activated depending on the current spatial context (e.g. free space) and the relative position of the interacting human. A combination of the elementary behaviors is performed in real time with our mobile robot in different environments. First experimental results are provided to demonstrate the practicability of the proposed approach.

    @inproceedings{lirolem6937,
           booktitle = {International Workshop on Cognition for Technical Systems},
              editor = {B. Gottfried and H. Aghajan},
               month = {December},
               title = {Spatial context-aware person-following for a domestic robot},
              author = {Fang Yuan and Marc Hanheide and Gerhard Sagerer},
                year = {2008},
                 note = {Domestic robots are in the focus of research in terms of service providers in households and even as robotic companion that share the living space with humans. A major capability of mobile domestic robots is joint exploration of space. One challenge to deal with this task is how could we let the robots move in space in reasonable, socially acceptable ways so that it will support interaction and communication as a part of the joint exploration. As a step towards this challenge, we have developed a context-aware following behavior considering these social aspects and applied these together with a multi-modal person-tracking method to switch between three basic following approaches, namely direction-following, path-following and parallel-following. These are derived from the observation of human-human following schemes and are activated depending on the current spatial context (e.g. free space) and the relative position of the interacting human. A combination of the elementary behaviors is performed in real time with our mobile robot in different environments. First experimental results are provided to demonstrate the practicability of the proposed approach.},
            keywords = {ARRAY(0x7fdc7805fb98)},
                 url = {http://eprints.lincoln.ac.uk/6937/},
             abstract = {Domestic robots are in the focus of research in terms of service providers in households and even as robotic companion that share the living space with humans. A major capability of mobile domestic robots is joint exploration of space. One challenge to deal with this task is how could we let the robots move in space in reasonable, socially acceptable ways so that it will support interaction and communication as a part of the joint exploration. As a step towards this challenge, we have developed a context-aware following behavior considering these social aspects and applied these together with a multi-modal person-tracking method to switch between three basic following approaches, namely direction-following, path-following and parallel-following. These are derived from the observation of human-human following schemes and are activated depending on the current spatial context (e.g. free space) and the relative position of the interacting human. A combination of the elementary behaviors is performed in real time with our mobile robot in different environments. First experimental results are provided to demonstrate the practicability of the proposed approach.}
    }

2007

  • N. Bellotto and H. Hu, “Multisensor data fusion for joint people tracking and identification with a service robot,” in IEEE Int. Conf. on Robotics and Biomimetics (ROBIO), 2007, pp. 1494-1499.
    [BibTeX] [Abstract] [EPrints]

    Tracking and recognizing people are essential skills modern service robots have to be provided with. The two tasks are generally performed independently, using ad-hoc solutions that first estimate the location of humans and then proceed with their identification. The solution presented in this paper, instead, is a general framework for tracking and recognizing people simultaneously with a mobile robot, where the estimates of the human location and identity are fused using probabilistic techniques. Our approach takes inspiration from recent implementations of joint tracking and classification, where the considered targets are mainly vehicles and aircrafts in military and civilian applications. We illustrate how people can be robustly tracked and recognized with a service robot using an improved histogram-based detection and multisensor data fusion. Some experiments in real challenging scenarios show the good performance of our solution.

    @inproceedings{lirolem2099,
           booktitle = {IEEE Int. Conf. on Robotics and Biomimetics (ROBIO)},
               title = {Multisensor data fusion for joint people tracking and identification with a service robot},
              author = {Nicola Bellotto and Huosheng Hu},
                year = {2007},
               pages = {1494--1499},
                note = {Tracking and recognizing people are essential skills modern service robots have to be provided with. The two tasks are generally performed independently, using ad-hoc solutions that first estimate the location of humans and then proceed with their identification. The solution presented in this paper, instead, is a general framework for tracking and recognizing people simultaneously with a mobile robot, where the estimates of the human location and identity are fused using probabilistic techniques. Our approach takes inspiration from recent implementations of joint tracking and classification, where the considered targets are mainly vehicles and aircrafts in military and civilian applications. We illustrate how people can be robustly tracked and recognized with a service robot using an improved histogram-based detection and multisensor data fusion. Some experiments in real challenging scenarios show the good performance of our solution.},
            keywords = {ARRAY(0x7fdc7801f190)},
                 url = {http://eprints.lincoln.ac.uk/2099/},
            abstract = {Tracking and recognizing people are essential skills modern service robots have to be provided with. The two tasks are generally performed independently, using ad-hoc solutions that first estimate the location of humans and then proceed with their identification. The solution presented in this paper, instead, is a general framework for tracking and recognizing people simultaneously with a mobile robot, where the estimates of the human location and identity are fused using probabilistic techniques. Our approach takes inspiration from recent implementations of joint tracking and classification, where the considered targets are mainly vehicles and aircrafts in military and civilian applications. We illustrate how people can be robustly tracked and recognized with a service robot using an improved histogram-based detection and multisensor data fusion. Some experiments in real challenging scenarios show the good performance of our solution.}
    }
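
    The probabilistic fusion of identity estimates mentioned in the entry above can be illustrated with a deliberately simplified sketch (not the paper's formulation): for one tracked person, a discrete Bayes filter keeps a belief over known identities and updates it with the likelihoods produced by whatever recognition cue fired on the current frame. The identities and likelihood values below are invented.

    import numpy as np

    identities = ["anna", "bob", "carol"]
    belief = np.full(len(identities), 1.0 / len(identities))   # uniform prior over identities

    def update(belief, likelihood):
        """One Bayes update: posterior is proportional to likelihood times prior."""
        posterior = belief * likelihood
        return posterior / posterior.sum()

    # A sequence of (noisy) recognition likelihoods for the tracked person,
    # e.g. from a face match or a colour-histogram cue on successive frames.
    observations = [
        np.array([0.5, 0.3, 0.2]),   # weak cue, slightly favours "anna"
        np.array([0.7, 0.2, 0.1]),   # stronger match for "anna"
        np.array([0.6, 0.3, 0.1]),
    ]
    for likelihood in observations:
        belief = update(belief, likelihood)

    print(dict(zip(identities, np.round(belief, 3))))   # belief concentrates on "anna"
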
  • G. Cielniak, T. Duckett, and A. J. Lilienthal, “Improved data association and occlusion handling for vision-based people tracking by mobile robots,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2007, pp. 3436-3441.
    [BibTeX] [Abstract] [EPrints]

    This paper presents an approach for tracking multiple persons using a combination of colour and thermal vision sensors on a mobile robot. First, an adaptive colour model is incorporated into the measurement model of the tracker. Second, a new approach for detecting occlusions is introduced, using a machine learning classifier for pairwise comparison of persons (classifying which one is in front of the other). Third, explicit occlusion handling is then incorporated into the tracker.

    @inproceedings{lirolem1848,
               month = {October},
              author = {Grzegorz Cielniak and Tom Duckett and J. Achim Lilienthal},
                note = {This paper presents an approach for tracking multiple persons using a combination of colour and thermal vision sensors on a mobile robot. First, an adaptive colour model is incorporated into the measurement model of the tracker. Second, a new approach for detecting occlusions is introduced, using a machine learning classifier for pairwise comparison of persons (classifying which one is in front of the other). Third, explicit occlusion handling is then incorporated into the tracker.},
           booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
               title = {Improved data association and occlusion handling for vision-based people tracking by mobile robots},
           publisher = {IEEE},
               pages = {3436--3441},
                year = {2007},
            keywords = {ARRAY(0x7fdc78172ae0)},
                 url = {http://eprints.lincoln.ac.uk/1848/},
            abstract = {This paper presents an approach for tracking multiple persons using a combination of colour and thermal vision sensors on a mobile robot. First, an adaptive colour model is incorporated into the measurement model of the tracker. Second, a new approach for detecting occlusions is introduced, using a machine learning classifier for pairwise comparison of persons (classifying which one is in front of the other). Third, explicit occlusion handling is then incorporated into the tracker.}
    }
  • T. Haferlach, J. Wessnitzer, M. Mangan, and B. Webb, “Evolving a neural model of insect path integration,” Adaptive Behavior, vol. 15, iss. 3, pp. 273-287, 2007.
    [BibTeX] [Abstract] [EPrints]

    Path integration is an important navigation strategy in many animal species. We use a genetic algorithm to evolve a novel neural model of path integration, based on input from cells that encode the heading of the agent in a manner comparable to the polarization-sensitive interneurons found in insects. The home vector is encoded as a population code across a circular array of cells that integrate this input. This code can be used to control return to the home position. We demonstrate the capabilities of the network under noisy conditions in simulation and on a robot.

    @article{lirolem23574,
              volume = {15},
              number = {3},
               month = {September},
              author = {Thomas Haferlach and Jan Wessnitzer and Michael Mangan and Barbara Webb},
               title = {Evolving a neural model of insect path integration},
           publisher = {SAGE Publications},
                year = {2007},
             journal = {Adaptive Behavior},
               pages = {273--287},
            keywords = {ARRAY(0x7fdc781217d0)},
                 url = {http://eprints.lincoln.ac.uk/23574/},
            abstract = {Path integration is an important navigation strategy in many animal species. We use a genetic algorithm to evolve a novel neural model of path integration, based on input from cells that encode the heading of the agent in a manner comparable to the polarization-sensitive interneurons found in insects. The home vector is encoded as a population code across a circular array of cells that integrate this input. This code can be used to control return to the home position. We demonstrate the capabilities of the network under noisy conditions in simulation and on a robot.}
    }
  • M. Magnusson, A. Lilienthal, and T. Duckett, “Scan registration for autonomous mining vehicles using 3D-NDT,” Journal of Field Robotics, vol. 24, iss. 10, pp. 803-827, 2007.
    [BibTeX] [Abstract] [EPrints]

    Scan registration is an essential subtask when building maps based on range finder data from mobile robots. The problem is to deduce how the robot has moved between consecutive scans, based on the shape of overlapping portions of the scans. This paper presents a new algorithm for registration of 3D data. The algorithm is a generalization and improvement of the normal distributions transform (NDT) for 2D data developed by Biber and Strasser, which allows for accurate registration using a memory-efficient representation of the scan surface. A detailed quantitative and qualitative comparison of the new algorithm with the 3D version of the popular ICP (iterative closest point) algorithm is presented. Results with actual mine data, some of which were collected with a new prototype 3D laser scanner, show that the presented algorithm is faster and slightly more reliable than the standard ICP algorithm for 3D registration, while using a more memory efficient scan surface representation.

    @article{lirolem1615,
              volume = {24},
              number = {10},
              author = {Martin Magnusson and Achim Lilienthal and Tom Duckett},
                note = {Scan registration is an essential subtask when building maps based on range finder data from mobile robots. The problem is to deduce how the robot has moved between consecutive scans, based on the shape of overlapping portions of the scans. This paper presents a new algorithm for registration of 3D data. The algorithm is a generalization and improvement of the normal distributions transform (NDT) for 2D data developed by Biber and Strasser, which allows for accurate registration using a memory-efficient representation of the scan surface. A detailed quantitative and qualitative comparison of the new algorithm with the 3D version of the popular ICP (iterative closest point) algorithm is presented. Results with actual mine data, some of which were collected with a new prototype 3D laser scanner, show that the presented algorithm is faster and slightly more reliable than the standard ICP algorithm for 3D registration, while using a more memory efficient scan surface representation.},
               title = {Scan registration for autonomous mining vehicles using 3D-NDT},
           publisher = {Wiley Periodicals, Inc.},
                year = {2007},
             journal = {Journal of Field Robotics},
               pages = {803--827},
            keywords = {ARRAY(0x7fdc7811e428)},
                 url = {http://eprints.lincoln.ac.uk/1615/},
            abstract = {Scan registration is an essential subtask when building maps based on range finder data from mobile robots. The problem is to deduce how the robot has moved between consecutive scans, based on the shape of overlapping portions of the scans. This paper presents a new algorithm for registration of 3D data. The algorithm is a generalization and improvement of the normal distributions transform (NDT) for 2D data developed by Biber and Strasser, which allows for accurate registration using a memory-efficient representation of the scan surface. A detailed quantitative and qualitative comparison of the new algorithm with the 3D version of the popular ICP (iterative closest point) algorithm is presented. Results with actual mine data, some of which were collected with a new prototype 3D laser scanner, show that the presented algorithm is faster and slightly more reliable than the standard ICP algorithm for 3D registration, while using a more memory efficient scan surface representation.}
    }
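
    The entry above builds on the normal distributions transform (NDT); a heavily reduced sketch of the underlying representation, under illustrative assumptions about cell size and data, is given below. The reference scan is divided into cells, each cell is summarised by the mean and covariance of its points, and a candidate set of points is scored by how well it fits those local Gaussians. The pose optimisation that turns this into a registration algorithm is omitted.

    import numpy as np

    def build_ndt(points, cell_size=1.0):
        """Summarise a reference scan as per-cell Gaussians (mean, covariance)."""
        cells = {}
        keys = np.floor(points / cell_size).astype(int)
        for key in set(map(tuple, keys)):
            pts = points[np.all(keys == key, axis=1)]
            if len(pts) >= 5:                      # need enough points for a stable covariance
                cells[key] = (pts.mean(axis=0), np.cov(pts.T) + 1e-6 * np.eye(3))
        return cells

    def score(cells, points, cell_size=1.0):
        """Score how well a set of (already transformed) points fits the cell Gaussians."""
        s = 0.0
        for p in points:
            key = tuple(np.floor(p / cell_size).astype(int))
            if key in cells:
                mean, cov = cells[key]
                d = p - mean
                s += np.exp(-0.5 * d @ np.linalg.solve(cov, d))
        return s

    rng = np.random.default_rng(2)
    reference = rng.uniform(0, 5, size=(2000, 3))
    cells = build_ndt(reference)
    print("self-score of the first 100 points:", round(score(cells, reference[:100]), 2))
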
  • P. Munkevik, G. Hall, and T. Duckett, “A computer vision system for appearance-based descriptive sensory evaluation of meals,” Journal of Food Engineering, vol. 78, iss. 1, pp. 246-256, 2007.
    [BibTeX] [Abstract] [EPrints]

    This paper presents a complete machine vision system for automatic descriptive sensory evaluation of meals. A human sensory panel first developed a set of 72 sensory attributes describing the appearance of a prototypical meal, and then evaluated the intensities of those attributes on a data set of 58 images of example meals. This data was then used both to train and validate the performance of the artificial system. This system covers all stages of image analysis from pre-processing to pattern recognition, including novel techniques for enhancing the segmentation of meal components and extracting image features that mimic the attributes developed by the panel. Artificial neural networks were used to learn the mapping from image features to attribute intensity values. The results showed that the new system was extremely good in learning and reproducing the opinion of the human sensory experts, achieving almost the same performance as the panel members themselves.

    @article{lirolem1684,
              volume = {78},
              number = {1},
               month = {January},
              author = {Per Munkevik  and Gunnar Hall and Tom Duckett},
               title = {A computer vision system for appearance-based descriptive sensory evaluation of meals},
           publisher = {Elsevier},
                year = {2007},
             journal = {Journal of Food Engineering},
               pages = {246--256},
            keywords = {ARRAY(0x7fdc7812b188)},
                 url = {http://eprints.lincoln.ac.uk/1684/},
             abstract = {This paper presents a complete machine vision system for automatic descriptive sensory evaluation of meals. A human sensory panel first developed a set of 72 sensory attributes describing the appearance of a prototypical meal, and then evaluated the intensities of those attributes on a data set of 58 images of example meals. This data was then used both to train and validate the performance of the artificial system. This system covers all stages of image analysis from pre-processing to pattern recognition, including novel techniques for enhancing the segmentation of meal components and extracting image features that mimic the attributes developed by the panel. Artificial neural networks were used to learn the mapping from image features to attribute intensity values. The results showed that the new system was extremely good in learning and reproducing the opinion of the human sensory experts, achieving almost the same performance as the panel members themselves.}
    }
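
    The learning stage described in the entry above maps image features to panel-rated attribute intensities; the sketch below shows the idea with a small feed-forward network on made-up data. Only the attribute count (72) and the number of images (58) follow the entry; the feature dimension, network size and data are illustrative.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    n_images, n_features, n_attributes = 58, 40, 72
    X = rng.normal(size=(n_images, n_features))             # image features per meal (placeholder)
    Y = rng.uniform(0, 10, size=(n_images, n_attributes))   # panel intensity ratings (placeholder)

    # One multi-output regressor predicts all attribute intensities at once.
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model.fit(X, Y)
    print("predicted attribute intensities for one image:", model.predict(X[:1]).shape)
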
  • F. Schubert, T. P. Spexard, M. Hanheide, and S. Wachsmuth, “Active vision-based localization for robots in a home-tour scenario,” in 5th International Conference on Computer Vision Systems (ICVS 2007), 2007.
    [BibTeX] [Abstract] [EPrints]

    Self-Localization is a crucial task for mobile robots. It is not only a requirement for auto navigation but also provides contextual information to support human robot interaction (HRI). In this paper we present an active vision-based localization method for integration in a complex robot system to work in human interaction scenarios (e.g. home-tour) in a real world apartment. The holistic features used are robust to illumination and structural changes in the scene. The system uses only a single pan-tilt camera shared between different vision applications running in parallel to reduce the number of sensors. Additional information from other modalities (like laser scanners) can be used, profiting of an integration into an existing system. The camera view can be actively adapted and the evaluation showed that different rooms can be discerned.

    @inproceedings{lirolem6939,
           booktitle = {5th International Conference on Computer Vision Systems (ICVS 2007)},
              editor = {B. Gottfried and H. Aghajan},
               title = {Active vision-based localization for robots in a home-tour scenario},
              author = {Falk Schubert and Thorsten P. Spexard and Marc Hanheide and Sven Wachsmuth},
           publisher = {Applied Computer Science Group, Bielefeld University, Germany},
                year = {2007},
                 note = {Self-Localization is a crucial task for mobile robots. It is not only a requirement for auto navigation but also provides contextual information to support human robot interaction (HRI). In this paper we present an active vision-based localization method for integration in a complex robot system to work in human interaction scenarios (e.g. home-tour) in a real world apartment. The holistic features used are robust to illumination and structural changes in the scene. The system uses only a single pan-tilt camera shared between different vision applications running in parallel to reduce the number of sensors. Additional information from other modalities (like laser scanners) can be used, profiting of an integration into an existing system. The camera view can be actively adapted and the evaluation showed that different rooms can be discerned.},
            keywords = {ARRAY(0x7fdc78146df0)},
                 url = {http://eprints.lincoln.ac.uk/6939/},
             abstract = {Self-Localization is a crucial task for mobile robots. It is not only a requirement for auto navigation but also provides contextual information to support human robot interaction (HRI). In this paper we present an active vision-based localization method for integration in a complex robot system to work in human interaction scenarios (e.g. home-tour) in a real world apartment. The holistic features used are robust to illumination and structural changes in the scene. The system uses only a single pan-tilt camera shared between different vision applications running in parallel to reduce the number of sensors. Additional information from other modalities (like laser scanners) can be used, profiting of an integration into an existing system. The camera view can be actively adapted and the evaluation showed that different rooms can be discerned.}
    }
  • H. Siegl, M. Hanheide, S. Wrede, and A. Pinz, “An augmented reality human-computer interface for object localization in a cognitive vision system,” Image and Vision Computing, vol. 25, iss. 12, pp. 1895-1903, 2007.
    [BibTeX] [Abstract] [EPrints]

    The European Cognitive Vision project VAMPIRE uses mobile AR-kits to interact with a visual active memory for teaching and retrieval purposes. This paper describes concept and technical realization of the used mobile AR-kits and discusses interactive learning and retrieval in office environments, and the active memory infrastructure. The focus is on 3D interaction for pointing in a scene coordinate system. This is achieved by 3D augmented pointing, which combines inside-out tracking for head pose recovery and 3D stereo human-computer interaction. Experimental evaluation shows that the accuracy of this 3D cursor is within a few centimeters, which is sufficient to point at an object in an office. Finally, an application of the cursor in VAMPIRE is presented, where in addition to the mobile system, at least one stationary active camera is used to obtain different views of an object. There are many potential applications, for example an improved view-based object recognition.

    @article{lirolem6712,
              volume = {25},
              number = {12},
               month = {December},
              author = {H. Siegl and Marc Hanheide and S. Wrede and A. Pinz},
                 note = {The European Cognitive Vision project VAMPIRE uses mobile AR-kits to interact with a visual active memory for teaching and retrieval purposes. This paper describes concept and technical realization of the used mobile AR-kits and discusses interactive learning and retrieval in office environments, and the active memory infrastructure. The focus is on 3D interaction for pointing in a scene coordinate system. This is achieved by 3D augmented pointing, which combines inside-out tracking for head pose recovery and 3D stereo human-computer interaction. Experimental evaluation shows that the accuracy of this 3D cursor is within a few centimeters, which is sufficient to point at an object in an office. Finally, an application of the cursor in VAMPIRE is presented, where in addition to the mobile system, at least one stationary active camera is used to obtain different views of an object. There are many potential applications, for example an improved view-based object recognition.},
                title = {An augmented reality human-computer interface for object localization in a cognitive vision system},
           publisher = {Elsevier},
                year = {2007},
             journal = {Image and Vision Computing},
               pages = {1895--1903},
            keywords = {ARRAY(0x7fdc78121a88)},
                 url = {http://eprints.lincoln.ac.uk/6712/},
             abstract = {The European Cognitive Vision project VAMPIRE uses mobile AR-kits to interact with a visual active memory for teaching and retrieval purposes. This paper describes concept and technical realization of the used mobile AR-kits and discusses interactive learning and retrieval in office environments, and the active memory infrastructure. The focus is on 3D interaction for pointing in a scene coordinate system. This is achieved by 3D augmented pointing, which combines inside-out tracking for head pose recovery and 3D stereo human-computer interaction. Experimental evaluation shows that the accuracy of this 3D cursor is within a few centimeters, which is sufficient to point at an object in an office. Finally, an application of the cursor in VAMPIRE is presented, where in addition to the mobile system, at least one stationary active camera is used to obtain different views of an object. There are many potential applications, for example an improved view-based object recognition.}
    }
  • T. Spexard, S. Li, B. Wrede, M. Hanheide, E. A. Topp, and H. Huttenrauch, “Interaction awareness for joint environment exploration,” in RO-MAN 2007 – The 16th IEEE International Symposium on Robot and Human Interactive Communication, 2007, pp. 546-551.
    [BibTeX] [Abstract] [EPrints]

    An important goal for research on service robots is the cooperation of a human and a robot as team. A service robot in a domestic environment needs to build a representation of its future workspace that corresponds to the human user's understanding of these surroundings. But it also needs to apply this model about the "where" and "what" in its current interaction to allow communication about objects and places in a human-adequate way. In this paper we present the integration of a hierarchical robotic mapping system into an interactive framework controlled by a dialog system. The goal is to use interactively acquired environment models to implement a robot with interaction aware behaviors. A major contribution of this work is a three-level hierarchy of spatial representation affecting three different communication dimensions. This hierarchy is consequently applied in the design of the grounding-based dialog, laser-based topological mapping, and an objects attention system. We demonstrate the benefits of this integration for learning and tour guiding in a human-comprehensible interaction between a robot and its user in a home-tour scenario. The enhanced interaction capabilities are crucial for developing a new generation of robots that will be accepted not only as service robots but also as robot companions.

    @inproceedings{lirolem6940,
               month = {August},
              author = {Thorsten Spexard and Shuyin Li and Britta Wrede and Marc Hanheide and Elin A. Topp and Helge Huttenrauch},
                 note = {An important goal for research on service robots is the cooperation of a human and a robot as team. A service robot in a domestic environment needs to build a representation of its future workspace that corresponds to the human user's understanding of these surroundings. But it also needs to apply this model about the "where" and "what" in its current interaction to allow communication about objects and places in a human-adequate way. In this paper we present the integration of a hierarchical robotic mapping system into an interactive framework controlled by a dialog system. The goal is to use interactively acquired environment models to implement a robot with interaction aware behaviors. A major contribution of this work is a three-level hierarchy of spatial representation affecting three different communication dimensions. This hierarchy is consequently applied in the design of the grounding-based dialog, laser-based topological mapping, and an objects attention system. We demonstrate the benefits of this integration for learning and tour guiding in a human-comprehensible interaction between a robot and its user in a home-tour scenario. The enhanced interaction capabilities are crucial for developing a new generation of robots that will be accepted not only as service robots but also as robot companions.},
           booktitle = {RO-MAN 2007 - The 16th IEEE International Symposium on Robot and Human Interactive Communication},
              editor = {B. Gottfried and H. Aghajan},
               title = {Interaction awareness for joint environment exploration},
               pages = {546--551},
                year = {2007},
            keywords = {ARRAY(0x7fdc78181220)},
                 url = {http://eprints.lincoln.ac.uk/6940/},
             abstract = {An important goal for research on service robots is the cooperation of a human and a robot as team. A service robot in a domestic environment needs to build a representation of its future workspace that corresponds to the human user's understanding of these surroundings. But it also needs to apply this model about the "where" and "what" in its current interaction to allow communication about objects and places in a human-adequate way. In this paper we present the integration of a hierarchical robotic mapping system into an interactive framework controlled by a dialog system. The goal is to use interactively acquired environment models to implement a robot with interaction aware behaviors. A major contribution of this work is a three-level hierarchy of spatial representation affecting three different communication dimensions. This hierarchy is consequently applied in the design of the grounding-based dialog, laser-based topological mapping, and an objects attention system. We demonstrate the benefits of this integration for learning and tour guiding in a human-comprehensible interaction between a robot and its user in a home-tour scenario. The enhanced interaction capabilities are crucial for developing a new generation of robots that will be accepted not only as service robots but also as robot companions.}
    }
  • T. P. Spexard, M. Hanheide, and G. Sagerer, “Human-oriented interaction with an anthropomorphic robot,” Robotics, IEEE Transactions on, vol. 23, iss. 5, pp. 852-862, 2007.
    [BibTeX] [Abstract] [EPrints]

    A very important aspect in developing robots capable of human-robot interaction (HRI) is the research in natural, human-like communication, and subsequently, the development of a research platform with multiple HRI capabilities for evaluation. Besides a flexible dialog system and speech understanding, an anthropomorphic appearance has the potential to support intuitive usage and understanding of a robot, e.g., human-like facial expressions and deictic gestures can as well be produced and also understood by the robot. As a consequence of our effort in creating an anthropomorphic appearance and to come close to a human- human interaction model for a robot, we decided to use human-like sensors, i.e., two cameras and two microphones only, in analogy to human perceptual capabilities too. Despite the challenges resulting from these limits with respect to perception, a robust attention system for tracking and interacting with multiple persons simultaneously in real time is presented. The tracking approach is sufficiently generic to work on robots with varying hardware, as long as stereo audio data and images of a video camera are available. To easily implement different interaction capabilities like deictic gestures, natural adaptive dialogs, and emotion awareness on the robot, we apply a modular integration approach utilizing XML-based data exchange. The paper focuses on our efforts to bring together different interaction concepts and perception capabilities integrated on a humanoid robot to achieve comprehending human-oriented interaction.

    @article{lirolem6741,
              volume = {23},
              number = {5},
               month = {October},
              author = {Thorsten P. Spexard and Marc Hanheide and Gerhard Sagerer},
                note = {A very important aspect in developing robots capable of human-robot interaction (HRI) is the research in natural, human-like communication, and subsequently, the development of a research platform with multiple HRI capabilities for evaluation. Besides a flexible dialog system and speech understanding, an anthropomorphic appearance has the potential to support intuitive usage and understanding of a robot, e.g., human-like facial expressions and deictic gestures can as well be produced and also understood by the robot. As a consequence of our effort in creating an anthropomorphic appearance and to come close to a human- human interaction model for a robot, we decided to use human-like sensors, i.e., two cameras and two microphones only, in analogy to human perceptual capabilities too. Despite the challenges resulting from these limits with respect to perception, a robust attention system for tracking and interacting with multiple persons simultaneously in real time is presented. The tracking approach is sufficiently generic to work on robots with varying hardware, as long as stereo audio data and images of a video camera are available. To easily implement different interaction capabilities like deictic gestures, natural adaptive dialogs, and emotion awareness on the robot, we apply a modular integration approach utilizing XML-based data exchange. The paper focuses on our efforts to bring together different interaction concepts and perception capabilities integrated on a humanoid robot to achieve comprehending human-oriented interaction.},
               title = {Human-oriented interaction with an anthropomorphic robot},
           publisher = {IEEE},
                year = {2007},
             journal = {Robotics, IEEE Transactions on},
               pages = {852--862},
            keywords = {ARRAY(0x7fdc7809db78)},
                 url = {http://eprints.lincoln.ac.uk/6741/},
            abstract = {A very important aspect in developing robots capable of human-robot interaction (HRI) is the research in natural, human-like communication, and subsequently, the development of a research platform with multiple HRI capabilities for evaluation. Besides a flexible dialog system and speech understanding, an anthropomorphic appearance has the potential to support intuitive usage and understanding of a robot, e.g., human-like facial expressions and deictic gestures can as well be produced and also understood by the robot. As a consequence of our effort in creating an anthropomorphic appearance and to come close to a human- human interaction model for a robot, we decided to use human-like sensors, i.e., two cameras and two microphones only, in analogy to human perceptual capabilities too. Despite the challenges resulting from these limits with respect to perception, a robust attention system for tracking and interacting with multiple persons simultaneously in real time is presented. The tracking approach is sufficiently generic to work on robots with varying hardware, as long as stereo audio data and images of a video camera are available. To easily implement different interaction capabilities like deictic gestures, natural adaptive dialogs, and emotion awareness on the robot, we apply a modular integration approach utilizing XML-based data exchange. The paper focuses on our efforts to bring together different interaction concepts and perception capabilities integrated on a humanoid robot to achieve comprehending human-oriented interaction.}
    }
  • C. Valgren, T. Duckett, and A. Lilienthal, “Incremental spectral clustering and its application to topological mapping,” in Proceedings of ICRA-2007, IEEE International Conference on Robotics and Automation, 2007.
    [BibTeX] [Abstract] [EPrints]

    This paper presents a novel use of spectral clustering algorithms to support cases where the entries in the affinity matrix are costly to compute. The method is incremental - the spectral clustering algorithm is applied to the affinity matrix after each row/column is added - which makes it possible to inspect the clusters as new data points are added. The method is well suited to the problem of appearance-based, on-line topological mapping for mobile robots. In this problem domain, we show that we can reduce environment-dependent parameters of the clustering algorithm to just a single, intuitive parameter. Experimental results in large outdoor and indoor environments show that we can close loops correctly by computing only a fraction of the entries in the affinity matrix. The accompanying video clip shows how an example map is produced by the algorithm.

    @inproceedings{lirolem1685,
           booktitle = {Proceedings of ICRA-2007, IEEE International Conference on Robotics and Automation},
               month = {March},
               title = {Incremental spectral clustering and its application to topological mapping},
              author = {Christoffer Valgren and Tom  Duckett and Achim Lilienthal },
                year = {2007},
                 note = {This paper presents a novel use of spectral clustering algorithms to support cases where the entries in the affinity matrix are costly to compute. The method is incremental - the spectral clustering algorithm is applied to the affinity matrix after each row/column is added - which makes it possible to inspect the clusters as new data points are added. The method is well suited to the problem of appearance-based, on-line topological mapping for mobile robots. In this problem domain, we show that we can reduce environment-dependent parameters of the clustering algorithm to just a single, intuitive parameter. Experimental results in large outdoor and indoor environments show that we can close loops correctly by computing only a fraction of the entries in the affinity matrix. The accompanying video clip shows how an example map is produced by the algorithm.},
            keywords = {ARRAY(0x7fdc7803f9c8)},
                 url = {http://eprints.lincoln.ac.uk/1685/},
            abstract = {This paper presents a novel use of spectral clustering algorithms to support cases where the entries in the affinity matrix are costly to compute. The method is incremental: the spectral clustering algorithm is applied to the affinity matrix after each row/column is added, which makes it possible to inspect the clusters as new data points are added. The method is well suited to the problem of appearance-based, on-line topological mapping for mobile robots. In this problem domain, we show that we can reduce environment-dependent parameters of the clustering algorithm to just a single, intuitive parameter. Experimental results in large outdoor and indoor environments show that we can close loops correctly by computing only a fraction of the entries in the affinity matrix. The accompanying video clip shows how an example map is produced by the algorithm.}
    }
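
    A minimal illustration of the incremental clustering idea in the entry above: the affinity matrix is grown by one row/column per new observation and spectral clustering is re-run on the enlarged matrix, so the clusters can be inspected as data arrive. This Python sketch uses toy 1-D data, a Gaussian affinity, and a fixed number of clusters; it is only a simplified reading of the abstract, not the authors' implementation (which, notably, avoids computing most affinity entries).

    import numpy as np
    from scipy.cluster.vq import kmeans2

    def add_point(A, new_affinities, self_affinity=1.0):
        """Grow the affinity matrix by one row/column (new point vs. all previous points)."""
        n = A.shape[0]
        grown = np.zeros((n + 1, n + 1))
        grown[:n, :n] = A
        grown[n, :n] = grown[:n, n] = new_affinities
        grown[n, n] = self_affinity
        return grown

    def spectral_clusters(A, k):
        """Standard normalized spectral clustering on the current affinity matrix."""
        d = A.sum(axis=1)
        d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
        l_sym = np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt   # symmetric normalized Laplacian
        _, vecs = np.linalg.eigh(l_sym)
        embedding = vecs[:, :k]                                 # k smallest eigenvectors
        _, labels = kmeans2(embedding, k, minit='++', seed=0)
        return labels

    # Toy demo: observations drawn from two well-separated 1-D "places".
    rng = np.random.default_rng(0)
    points, A = [0.0], np.array([[1.0]])
    for x in np.concatenate([rng.normal(0, 0.1, 9), rng.normal(5, 0.1, 10)]):
        A = add_point(A, np.exp(-(np.array(points) - x) ** 2))  # Gaussian affinities to previous points
        points.append(x)
        print(spectral_clusters(A, k=2))                        # inspect clusters as data arrive
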
  • S. Wachsmuth, S. Wrede, and M. Hanheide, “Coordinating interactive vision behaviors for cognitive assistance,” Computer Vision and Image Understanding, vol. 108, iss. 1-2, pp. 135-149, 2007.
    [BibTeX] [Abstract] [EPrints]

    Most of the research conducted in human-computer interaction (HCI) focuses on a seamless interface between a user and an application that is separated from the user in terms of working space and/or control, like navigation in image databases, instruction of robots, or information retrieval systems. The interaction paradigm of cognitive assistance goes one step further in that the application consists of assisting the user performing everyday tasks in his or her own environment and in that the user and the system share the control of such tasks. This kind of tight bidirectional interaction in realistic environments demands cognitive system skills like context awareness, attention, learning, and reasoning about the external environment. Therefore, the system needs to integrate a wide variety of visual functions, like localization, object tracking and recognition, action recognition, interactive object learning, etc. In this paper we show how different kinds of system behaviors are realized using the Active Memory Infrastructure that provides the technical basis for distributed computation and a data- and event-driven integration approach. A running augmented reality system for cognitive assistance is presented that supports users in mixing beverages. The flexibility and generality of the system framework provides an ideal testbed for studying visual cues in human-computer interaction. We report about results from first user studies.

    @article{lirolem6701,
              volume = {108},
              number = {1-2},
               month = {October},
              author = {Sven Wachsmuth and Sebastian Wrede and Marc Hanheide},
               title = {Coordinating interactive vision behaviors for cognitive assistance},
           publisher = {Springer},
                year = {2007},
             journal = {Computer Vision and Image Understanding},
               pages = {135--149},
                 url = {http://eprints.lincoln.ac.uk/6701/},
            abstract = {Most of the research conducted in human-computer interaction (HCI) focuses on a seamless interface between a user and an application that is separated from the user in terms of working space and/or control, like navigation in image databases, instruction of robots, or information retrieval systems. The interaction paradigm of cognitive assistance goes one step further in that the application consists of assisting the user performing everyday tasks in his or her own environment and in that the user and the system share the control of such tasks. This kind of tight bidirectional interaction in realistic environments demands cognitive system skills like context awareness, attention, learning, and reasoning about the external environment. Therefore, the system needs to integrate a wide variety of visual functions, like localization, object tracking and recognition, action recognition, interactive object learning, etc. In this paper we show how different kinds of system behaviors are realized using the Active Memory Infrastructure that provides the technical basis for distributed computation and a data- and event-driven integration approach. A running augmented reality system for cognitive assistance is presented that supports users in mixing beverages. The flexibility and generality of the system framework provides an ideal testbed for studying visual cues in human-computer interaction. We report about results from first user studies.}
    }
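
    The "data- and event-driven integration approach" in the entry above can be pictured as a shared memory that notifies subscribed components whenever elements of a given type are inserted. The Python sketch below is a hypothetical miniature of that pattern; the class and method names are invented for illustration and do not reflect the actual Active Memory Infrastructure API.

    from collections import defaultdict

    class ActiveMemory:
        """Toy event-driven shared memory: components insert typed elements,
        other components register callbacks that fire on those insertions."""

        def __init__(self):
            self._store = {}
            self._subscribers = defaultdict(list)   # (event, element_type) -> callbacks
            self._next_id = 0

        def subscribe(self, event, element_type, callback):
            self._subscribers[(event, element_type)].append(callback)

        def insert(self, element_type, data):
            self._next_id += 1
            self._store[self._next_id] = (element_type, data)
            for cb in self._subscribers[("insert", element_type)]:
                cb(self._next_id, data)             # event-driven: downstream processes react
            return self._next_id

    # Example: an object recogniser inserts a detection; an assistance behaviour reacts to it.
    memory = ActiveMemory()
    memory.subscribe("insert", "object",
                     lambda eid, obj: print(f"assistant: I can see a {obj['label']}"))
    memory.insert("object", {"label": "bottle", "position": (0.4, 0.2)})
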
  • S. Yue and F. C. Rind, “A synthetic vision system using directionally selective motion detectors to recognize collision,” Artificial Life, vol. 13, iss. 2, pp. 93-122, 2007.
    [BibTeX] [Abstract] [EPrints]

    Reliably recognizing objects approaching on a collision course is extremely important. A synthetic vision system is proposed to tackle the problem of collision recognition in dynamic environments. The system combines the outputs of four whole-field motion-detecting neurons, each receiving inputs from a network of neurons employing asymmetric lateral inhibition to suppress their responses to one direction of motion. An evolutionary algorithm is then used to adjust the weights between the four motion-detecting neurons to tune the system to detect collisions in two test environments. To do this, a population of agents, each representing a proposed synthetic visual system, either were shown images generated by a mobile Khepera robot navigating in a simplified laboratory environment or were shown images videoed outdoors from a moving vehicle. The agents had to cope with the local environment correctly in order to survive. After 400 generations, the best agent recognized imminent collisions reliably in the familiar environment where it had evolved. However, when the environment was swapped, only the agent evolved to cope in the robotic environment still signaled collision reliably. This study suggests that whole-field direction-selective neurons, with selectivity based on asymmetric lateral inhibition, can be organized into a synthetic vision system, which can then be adapted to play an important role in collision detection in complex dynamic scenes.

    @article{lirolem1218,
              volume = {13},
              number = {2},
               month = {March},
              author = {Shigang Yue and F. Claire Rind},
               title = {A synthetic vision system using directionally selective motion detectors to recognize collision},
           publisher = {MIT Press},
                year = {2007},
             journal = {Artificial Life},
               pages = {93--122},
                 url = {http://eprints.lincoln.ac.uk/1218/},
            abstract = {Reliably recognizing objects approaching on a collision course is extremely important. A synthetic vision system is proposed to tackle the problem of collision recognition in dynamic environments. The system combines the outputs of four whole-field motion-detecting neurons, each receiving inputs from a network of neurons employing asymmetric lateral inhibition to suppress their responses to one direction of motion. An evolutionary algorithm is then used to adjust the weights between the four motion-detecting neurons to tune the system to detect collisions in two test environments. To do this, a population of agents, each representing a proposed synthetic visual system, either were shown images generated by a mobile Khepera robot navigating in a simplified laboratory environment or were shown images videoed outdoors from a moving vehicle. The agents had to cope with the local environment correctly in order to survive. After 400 generations, the best agent recognized imminent collisions reliably in the familiar environment where it had evolved. However, when the environment was swapped, only the agent evolved to cope in the robotic environment still signaled collision reliably. This study suggests that whole-field direction-selective neurons, with selectivity based on asymmetric lateral inhibition, can be organized into a synthetic vision system, which can then be adapted to play an important role in collision detection in complex dynamic scenes.}
    }
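
    The system in the entry above combines the outputs of four whole-field direction-selective motion detectors through evolved weights and signals a collision when the weighted sum exceeds a threshold. The sketch below shows only that outer loop, with a toy evolutionary routine and synthetic detector traces standing in for the paper's neural model, image data, and evolutionary setup.

    import numpy as np

    rng = np.random.default_rng(0)

    def collision_decision(responses, weights, threshold=1.0):
        """Weighted combination of the four direction-selective detector outputs."""
        return float(weights @ responses) > threshold

    def fitness(weights, episodes):
        """Fraction of episodes judged correctly; an episode is a (T, 4) array of
        detector outputs over time plus a boolean ground-truth collision label."""
        correct = 0
        for responses, collided in episodes:
            predicted = any(collision_decision(r, weights) for r in responses)
            correct += int(predicted == collided)
        return correct / len(episodes)

    def evolve(episodes, population=20, generations=60):
        """Toy evolutionary loop: keep the best half, mutate it to refill the population."""
        pop = rng.normal(size=(population, 4))
        for _ in range(generations):
            scores = np.array([fitness(w, episodes) for w in pop])
            parents = pop[np.argsort(scores)[-population // 2:]]
            children = parents + rng.normal(scale=0.1, size=parents.shape)
            pop = np.vstack([parents, children])
        return max(pop, key=lambda w: fitness(w, episodes))

    # Synthetic demo data: collision episodes drive the detectors harder than safe ones.
    episodes = []
    for _ in range(40):
        collided = bool(rng.random() < 0.5)
        scale = 3.0 if collided else 0.5
        episodes.append((rng.random((30, 4)) * scale, collided))
    print(evolve(episodes))
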

2006

  • M. Hanheide, N. Hofemann, and G. Sagerer, “Action recognition in a wearable assistance system,” in 18th International Conference on Pattern Recognition (ICPR’06), 2006, pp. 1254-1258.
    [BibTeX] [Abstract] [EPrints]

    Enabling artificial systems to recognize human actions is a requisite to develop intelligent assistance systems that are able to instruct and supervise users in accomplishing tasks. In order to enable an assistance system to be wearable, head-mounted cameras allow to perceive a scene visually from a user's perspective. But realizing action recognition without any static sensors causes special challenges. The movement of the camera is directly related to the user's head motion and not controlled by the system. In this paper we present how a trajectory-based action recognition can be combined with object recognition, visual tracking, and a background motion compensation to be applicable in such a wearable assistance system. The suitability of our approach is proved by user studies in an object manipulation scenario.

    @inproceedings{lirolem6941,
               month = {August},
              author = {Marc Hanheide and Nils Hofemann and Gerhard Sagerer},
           booktitle = {18th International Conference on Pattern Recognition (ICPR'06)},
              editor = {B. Gottfried and H. Aghajan},
               title = {Action recognition in a wearable assistance system},
           publisher = {IEEE Computer Society},
                year = {2006},
               pages = {1254--1258},
                 url = {http://eprints.lincoln.ac.uk/6941/},
            abstract = {Enabling artificial systems to recognize human actions is a requisite to develop intelligent assistance systems that are able to instruct and supervise users in accomplishing tasks. In order to enable an assistance system to be wearable, head-mounted cameras allow to perceive a scene visually from a user's perspective. But realizing action recognition without any static sensors causes special challenges. The movement of the camera is directly related to the user's head motion and not controlled by the system. In this paper we present how a trajectory-based action recognition can be combined with object recognition, visual tracking, and a background motion compensation to be applicable in such a wearable assistance system. The suitability of our approach is proved by user studies in an object manipulation scenario.}
    }
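
    One ingredient named in the abstract above, background motion compensation for a head-mounted camera, can be approximated by estimating the global camera motion from background feature displacements and subtracting its accumulated effect from the tracked object trajectory. The sketch below assumes a pure-translation camera model and pre-computed background flow vectors, which is a simplification of whatever model the paper actually uses.

    import numpy as np

    def compensate_trajectory(raw_positions, background_flows):
        """Remove camera (head) motion from an object's image trajectory.

        raw_positions:    (T, 2) tracked object position per frame.
        background_flows: list of (N, 2) displacement vectors of background features
                          between consecutive frames (from any feature tracker).
        The median background displacement per frame is taken as the camera motion,
        and its running sum is subtracted from the raw positions."""
        raw_positions = np.asarray(raw_positions, dtype=float)
        camera_shift = np.zeros(2)
        compensated = [raw_positions[0].copy()]
        for t in range(1, len(raw_positions)):
            camera_shift += np.median(background_flows[t - 1], axis=0)
            compensated.append(raw_positions[t] - camera_shift)
        return np.array(compensated)

    # Toy example: the hand moves 3 px/frame to the right while the head pans 1 px/frame.
    T = 5
    hand = np.column_stack([np.arange(T) * (3.0 + 1.0), np.zeros(T)])  # observed image positions
    flows = [np.ones((50, 2)) * [1.0, 0.0] for _ in range(T - 1)]      # background shifts 1 px/frame
    print(compensate_trajectory(hand, flows))                          # recovers ~3 px/frame hand motion
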
  • M. Hanheide, “A cognitive ego-vision system for interactive assistance,” PhD Thesis, 2006.
    [BibTeX] [Abstract] [EPrints]

    With increasing computational power and decreasing size, computers nowadays are already wearable and mobile. They become attendant of peoples’ everyday life. Personal digital assistants and mobile phones equipped with adequate software gain a lot of interest in public, although the functionality they provide in terms of assistance is little more than a mobile databases for appointments, addresses, to-do lists and photos. Compared to the assistance a human can provide, such systems are hardly to call real assistants. The motivation to construct more human-like assistance systems that develop a certain level of cognitive capabilities leads to the exploration of two central paradigms in this work. The first paradigm is termed cognitive vision systems. Such systems take human cognition as a design principle of underlying concepts and develop learning and adaptation capabilities to be more flexible in their application. They are embodied, active, and situated. Second, the ego-vision paradigm is introduced as a very tight interaction scheme between a user and a computer system that especially eases close collaboration and assistance between these two. Ego-vision systems (EVS) take a user’s (visual) perspective and integrate the human in the system’s processing loop by means of a shared perception and augmented reality. EVSs adopt techniques of cognitive vision to identify objects, interpret actions, and understand the user’s visual perception. And they articulate their knowledge and interpretation by means of augmentations of the user’s own view. These two paradigms are studied as rather general concepts, but always with the goal in mind to realize more flexible assistance systems that closely collaborate with its users. This work provides three major contributions. First, a definition and explanation of ego-vision as a novel paradigm is given. Benefits and challenges of this paradigm are discussed as well. Second, a configuration of different approaches that permit an ego-vision system to perceive its environment and its user is presented in terms of object and action recognition, head gesture recognition, and mosaicing. These account for the specific challenges identified for ego-vision systems, whose perception capabilities are based on wearable sensors only. Finally, a visual active memory (VAM) is introduced as a flexible conceptual architecture for cognitive vision systems in general, and for assistance systems in particular. It adopts principles of human cognition to develop a representation for information stored in this memory. So-called memory processes continuously analyze, modify, and extend the content of this VAM. The functionality of the integrated system emerges from their coordinated interplay of these memory processes. An integrated assistance system applying the approaches and concepts outlined before is implemented on the basis of the visual active memory. The system architecture is discussed and some exemplary processing paths in this system are presented and discussed. It assists users in object manipulation tasks and has reached a maturity level that allows to conduct user studies. Quantitative results of different integrated memory processes are as well presented as an assessment of the interactive system by means of these user studies.

    @phdthesis{lirolem6743,
               month = {October},
               title = {A cognitive ego-vision system for interactive assistance},
              school = {Universität Bielefeld},
              author = {Marc Hanheide},
                year = {2006},
                 url = {http://eprints.lincoln.ac.uk/6743/},
            abstract = {With increasing computational power and decreasing size, computers nowadays are already wearable and mobile. They become attendant of peoples' everyday life. Personal digital assistants and mobile phones equipped with adequate software gain a lot of interest in public, although the functionality they provide in terms of assistance is little more than a mobile databases for appointments, addresses, to-do lists and photos. Compared to the assistance a human can provide, such systems are hardly to call real assistants. The motivation to construct more human-like assistance systems that develop a certain level of cognitive capabilities leads to the exploration of two central paradigms in this work. The first paradigm is termed cognitive vision systems. Such systems take human cognition as a design principle of underlying concepts and develop learning and adaptation capabilities to be more flexible in their application. They are embodied, active, and situated. Second, the ego-vision paradigm is introduced as a very tight interaction scheme between a user and a computer system that especially eases close collaboration and assistance between these two. Ego-vision systems (EVS) take a user's (visual) perspective and integrate the human in the system's processing loop by means of a shared perception and augmented reality. EVSs adopt techniques of cognitive vision to identify objects, interpret actions, and understand the user's visual perception. And they articulate their knowledge and interpretation by means of augmentations of the user's own view. These two paradigms are studied as rather general concepts, but always with the goal in mind to realize more flexible assistance systems that closely collaborate with its users. This work provides three major contributions. First, a definition and explanation of ego-vision as a novel paradigm is given. Benefits and challenges of this paradigm are discussed as well. Second, a configuration of different approaches that permit an ego-vision system to perceive its environment and its user is presented in terms of object and action recognition, head gesture recognition, and mosaicing. These account for the specific challenges identified for ego-vision systems, whose perception capabilities are based on wearable sensors only. Finally, a visual active memory (VAM) is introduced as a flexible conceptual architecture for cognitive vision systems in general, and for assistance systems in particular. It adopts principles of human cognition to develop a representation for information stored in this memory. So-called memory processes continuously analyze, modify, and extend the content of this VAM. The functionality of the integrated system emerges from their coordinated interplay of these memory processes. An integrated assistance system applying the approaches and concepts outlined before is implemented on the basis of the visual active memory. The system architecture is discussed and some exemplary processing paths in this system are presented and discussed. It assists users in object manipulation tasks and has reached a maturity level that allows to conduct user studies. Quantitative results of different integrated memory processes are as well presented as an assessment of the interactive system by means of these user studies.}
    }
  • A. J. Lilienthal, A. Loutfi, and T. Duckett, “Airborne chemical sensing with mobile robots,” Sensors, vol. 6, iss. 11, pp. 1616-1678, 2006.
    [BibTeX] [Abstract] [EPrints]

    Airborne chemical sensing with mobile robots has been an active research area since the beginning of the 1990s. This article presents a review of research work in this field, including gas distribution mapping, trail guidance, and the different subtasks of gas source localisation. Due to the difficulty of modelling gas distribution in a real world environment with currently available simulation techniques, we focus largely on experimental work and do not consider publications that are purely based on simulations.

    @article{lirolem13378,
              volume = {6},
              number = {11},
               month = {November},
              author = {Achim J. Lilienthal and Amy Loutfi and Tom Duckett},
                note = {This article belongs to the Special Issue Gas Sensors},
               title = {Airborne chemical sensing with mobile robots},
           publisher = {MDPI},
                year = {2006},
             journal = {Sensors},
               pages = {1616--1678},
                 url = {http://eprints.lincoln.ac.uk/13378/},
            abstract = {Airborne chemical sensing with mobile robots has been an active research area since the beginning of the 1990s. This article presents a review of research work in this field, including gas distribution mapping, trail guidance, and the different subtasks of gas source localisation. Due to the difficulty of modelling gas distribution in a real world environment with currently available simulation techniques, we focus largely on experimental work and do not consider publications that are purely based on simulations.}
    }
  • F. Lomker, S. Wrede, M. Hanheide, and J. Fritsch, “Building modular vision systems with a graphical plugin environment,” in Fourth IEEE International Conference on Computer Vision Systems (ICVS’06), 2006.
    [BibTeX] [Abstract] [EPrints]

    With the increasing interest in computer vision for interactive systems, the challenges of the development process involving many researchers are becoming more prominent. Issues like reuse of algorithms, modularity, and distributed processing are getting more important in the endeavor of building complex vision systems. We present a framework that allows independent development of enclosed components and supports interactive optimization of algorithmic parameters in an online fashion. The communication between components is performed nearly without any slow down compared to a monolithic system. Through the modular concept, all components can be flexibly distributed and reused in other application domains. The suitability of the approach is demonstrated with an example system.

    @inproceedings{lirolem6942,
               month = {January},
              author = {Frank Lomker and Sebastian Wrede and Marc Hanheide and Jannik Fritsch},
           booktitle = {Fourth IEEE International Conference on Computer Vision Systems (ICVS'06)},
              editor = {B. Gottfried and H. Aghajan},
               title = {Building modular vision systems with a graphical plugin environment},
           publisher = {IEEE Computer Society},
                year = {2006},
                 url = {http://eprints.lincoln.ac.uk/6942/},
            abstract = {With the increasing interest in computer vision for interactive
    systems, the challenges of the development process
    involving many researchers are becoming more prominent.
    Issues like reuse of algorithms, modularity, and distributed
    processing are getting more important in the endeavor of
    building complex vision systems. We present a framework
    that allows independent development of enclosed components
    and supports interactive optimization of algorithmic
    parameters in an online fashion. The communication between
    components is performed nearly without any slow
    down compared to a monolithic system. Through th