
    Adaptive Multi Agent System for Guiding Groups of People in Urban Areas

    This article presents a new approach for guiding a group of people using an adaptive multi-agent system. The group of people is simulated with social forces, which control human motion according to the dynamic environment. To guide the group, we use a set of agents that work cooperatively and adapt their behavior to the situation in which they operate and to how people react. We present a model that overcomes the limitations of existing approaches, which are either tailored to tightly bounded environments or based on unrealistic human behaviors. In particular, we define a Discrete-Time-Motion model that, on the one hand, represents the environment by means of a potential field and, on the other hand, provides motion models for people and robots that respond to realistic situations, accounting for human behaviors such as leaving the group. Furthermore, we present an analysis of the forces acting among agents and humans through simulations of different robot and human configurations and behaviors. Finally, a new model of multi-robot task allocation applied to people guidance in urban settings is presented. The developed architecture overcomes some of the limitations of existing approaches, such as emergent cooperation or resource sharing.
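
    The social-force simulation mentioned above is not spelled out in this abstract; simulations of this kind are commonly based on the classical Social Force Model of Helbing and Molnár. As a minimal sketch, with the symbols chosen here purely for illustration, each simulated person i is driven by

        m_i \frac{d\mathbf{v}_i}{dt} = m_i \frac{v_i^0 \mathbf{e}_i - \mathbf{v}_i}{\tau_i} + \sum_{j \neq i} A\, e^{(r_{ij} - d_{ij})/B}\, \mathbf{n}_{ij} + \sum_{W} \mathbf{f}_{iW}

    where the first term relaxes the velocity toward the desired speed v_i^0 along direction \mathbf{e}_i with time constant \tau_i, the second term is an exponential repulsion from other pedestrians j (with d_{ij} the distance between i and j, r_{ij} the sum of their radii and \mathbf{n}_{ij} the unit vector pointing away from j), and the last term collects repulsions from walls and obstacles W.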

    Interactive multiple object learning with scanty human supervision

    We present a fast, online human-robot interaction approach that progressively learns multiple object classifiers using scanty human supervision. Given an input video stream recorded during the human-robot interaction, the user only needs to annotate a small fraction of frames to compute object-specific classifiers based on random ferns that share the same features. The resulting methodology is fast (complex object appearances can be learned in a few seconds), versatile (it can be applied to unconstrained scenarios), scalable (real experiments show we can model up to 30 different object classes), and minimizes the amount of human intervention by leveraging the uncertainty measures associated with each classifier. We thoroughly validate the approach on synthetic data and on real sequences acquired with a mobile platform in indoor and outdoor scenarios containing a multitude of different objects. We show that, with little human assistance, we are able to build object classifiers robust to viewpoint changes, partial occlusions, varying lighting and cluttered backgrounds.
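
    The abstract does not give the fern formulation itself; a minimal sketch of how several object classes can share one set of random binary features (ferns) while keeping per-class probability tables that are updated online. All names, sizes and parameters below are illustrative assumptions, not the paper's implementation.

        import numpy as np

        class SharedRandomFerns:
            """Multiple object classes share the same random binary features (ferns);
            each class keeps its own Laplace-smoothed probability tables."""

            def __init__(self, n_ferns=20, fern_size=8, patch_size=32, n_classes=0, rng=None):
                self.rng = rng or np.random.default_rng(0)
                self.n_ferns, self.fern_size = n_ferns, fern_size
                # Each fern = fern_size random pixel-pair comparisons inside the patch.
                self.pairs = self.rng.integers(0, patch_size * patch_size,
                                               size=(n_ferns, fern_size, 2))
                self.counts = []          # one (n_ferns, 2**fern_size) table per class
                for _ in range(n_classes):
                    self.add_class()

            def add_class(self):
                # New object class: start from uniform (smoothed) counts.
                self.counts.append(np.ones((self.n_ferns, 2 ** self.fern_size)))

            def _fern_codes(self, patch):
                flat = patch.ravel()
                bits = flat[self.pairs[..., 0]] > flat[self.pairs[..., 1]]
                return bits.dot(1 << np.arange(self.fern_size))   # one integer code per fern

            def update(self, patch, label):
                # Online learning step: increment the observed fern codes for this class.
                codes = self._fern_codes(patch)
                self.counts[label][np.arange(self.n_ferns), codes] += 1

            def predict(self, patch):
                # Log-posterior per class; the spread of these scores can serve as the
                # uncertainty measure used to decide when to ask the user for a label.
                codes = self._fern_codes(patch)
                scores = []
                for table in self.counts:
                    probs = table / table.sum(axis=1, keepdims=True)
                    scores.append(np.log(probs[np.arange(self.n_ferns), codes]).sum())
                return np.asarray(scores)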

    On-line adaptive side-by-side human robot companion to approach a moving person to interact

    In this paper, we present an on-line adaptive side-by-side human-robot companion for approaching a moving person to interact with. Our framework makes the robot-human pair capable of jointly overcoming the dynamic and static obstacles of the environment while reaching a moving goal, namely the person who wants to interact with the pair. We define a new moving final goal that depends on the environment, the movement of the group, and the movement of the interacting person. Moreover, we modify the Extended Social Force Model to include this new moving goal. The method has been validated over several situations in simulation. This work is an extension of "On-line adaptive side-by-side human robot companion in dynamic urban environments", IROS 2017.
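
    The form of the new moving-goal term is not given in the abstract; in Social-Force-style planners the attraction to a goal, here a moving one, is typically written as a relaxation toward the desired velocity. A sketch under that standard assumption, with the symbols chosen here for illustration:

        \mathbf{f}^{goal}_r(t) = \frac{1}{\tau}\left( v^0 \frac{\mathbf{g}(t) - \mathbf{p}_r(t)}{\lVert \mathbf{g}(t) - \mathbf{p}_r(t) \rVert} - \mathbf{v}_r(t) \right)

    where \mathbf{g}(t) is the moving final goal (a function of the interacting person, the group and the environment, as described above), \mathbf{p}_r and \mathbf{v}_r are the robot's position and velocity, v^0 its desired speed and \tau a relaxation time.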

    Searching and tracking people with cooperative mobile robots

    Social robots should be able to search for and track people in order to help them. In this paper we present two different techniques for coordinated multi-robot teams that search for and track people. A probability map (belief) of the target person's location is maintained; to initialize and update it, two methods were implemented and tested: one based on a reinforcement learning algorithm and the other based on a particle filter. The person is tracked when visible; otherwise, exploration is performed by balancing, for each candidate location, the belief, the distance, and whether nearby locations are already being explored by other robots of the team. The approach was validated through an extensive set of simulations using up to five agents and a large number of dynamic obstacles; furthermore, over three hours of real-life experiments with two robots searching and tracking were recorded and analysed.
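
    The abstract only names the trade-off used to pick exploration targets; one plausible scoring rule, with the weights, helper names and penalty shape invented here for illustration, might look like:

        import numpy as np

        def pick_exploration_target(candidates, belief, robot_pos, teammate_targets,
                                    w_belief=1.0, w_dist=0.3, w_team=0.5):
            """Choose where to search next by balancing, for each candidate location,
            the belief of finding the person there, the travel distance, and whether
            nearby locations are already covered by teammates (weights are illustrative).
            candidates: list of (x, y) tuples; belief: dict mapping candidates to
            probability mass; teammate_targets: locations chosen by the other robots."""
            robot_pos = np.asarray(robot_pos, dtype=float)
            scores = []
            for c in candidates:
                c_arr = np.asarray(c, dtype=float)
                dist = np.linalg.norm(c_arr - robot_pos)
                # Soft penalty for candidates close to targets already taken by teammates.
                overlap = sum(np.exp(-np.linalg.norm(c_arr - np.asarray(t, dtype=float)))
                              for t in teammate_targets)
                scores.append(w_belief * belief[c] - w_dist * dist - w_team * overlap)
            return candidates[int(np.argmax(scores))]

        # Example: two candidate cells, one already covered by a teammate.
        target = pick_exploration_target(
            candidates=[(2.0, 1.0), (8.0, 3.0)],
            belief={(2.0, 1.0): 0.4, (8.0, 3.0): 0.6},
            robot_pos=(0.0, 0.0),
            teammate_targets=[(8.0, 2.5)])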

    Searching and tracking people in urban environments with static and dynamic obstacles

    Searching for and tracking people in crowded urban areas, where they can be occluded by static or dynamic obstacles, is an important behavior for social robots that assist humans in urban outdoor environments. In this work, we propose a method that handles searching and tracking in real time using a Highest Belief Particle Filter Searcher and Tracker. It makes use of a modified Particle Filter (PF) which, in contrast to other methods, can both search for and track a person under uncertainty, with false negative detections or no detection at all, in continuous space and in real time. Moreover, the method uses dynamic obstacles to improve the predicted possible location of the person. Comparisons have been made with our previous method, the Adaptive Highest Belief Continuous Real-time POMCP Follower, under different conditions and with dynamic obstacles. Real-life experiments were carried out over two weeks with a mobile service robot in two urban environments in Barcelona, with other people walking around.
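
    The modified particle filter is not detailed in the abstract; a minimal sketch of one predict/update cycle that also copes with missed detections, with the motion and measurement models, constants and interface all being illustrative assumptions:

        import numpy as np

        def pf_search_track_step(particles, weights, detection, visible,
                                 motion_std=0.3, meas_std=0.5, p_detect=0.7,
                                 rng=np.random.default_rng(0)):
            """One step of a person-search particle filter tolerant to false negatives.
            particles: (N, 2) hypothesised person positions; detection: (x, y) or None;
            visible: (N,) bool mask of particles inside the robot's current field of view."""
            # Predict: diffuse the particles with a simple random-walk person motion model.
            particles = particles + rng.normal(0.0, motion_std, particles.shape)

            if detection is not None:
                # Update with a detection: weight by a Gaussian likelihood of the measurement.
                d2 = np.sum((particles - np.asarray(detection)) ** 2, axis=1)
                weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
            else:
                # No detection (possibly a false negative): down-weight only the particles
                # the robot should have seen, so belief mass drifts to occluded regions.
                weights = weights * np.where(visible, 1.0 - p_detect, 1.0)

            weights = weights / weights.sum()
            # Resample when the effective sample size collapses.
            if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
                idx = rng.choice(len(particles), size=len(particles), p=weights)
                particles = particles[idx]
                weights = np.full(len(particles), 1.0 / len(particles))
            return particles, weights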

    Teaching robot’s proactive behavior using human assistance

    In recent years, there has been growing interest in enabling autonomous social robots to interact with people. However, many questions remain unresolved regarding the social capabilities robots should have in order to make this interaction ever more natural. In this paper, we tackle this problem through a comprehensive study of various topics involved in the interaction between a mobile robot and untrained human volunteers across a variety of tasks. In particular, this work presents a framework that enables the robot to proactively approach people and establish friendly interaction. To this end, we provided the robot with several perception and action skills, such as detecting people, planning an approach, and communicating the intention to initiate a conversation while expressing an emotional state. We also introduce an interactive learning system that uses the person's volunteered assistance to incrementally improve the robot's perception skills. As a proof of concept, we focus on the particular task of online face learning and recognition. We conducted real-life experiments with our Tibi robot to validate the framework during the interaction process. Within this study, several surveys and user studies were carried out to assess the social acceptability of the robot in the context of different tasks.
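
    The interactive learning loop is only described at a high level here; a minimal sketch of an uncertainty-driven variant, in which the robot asks its interlocutor for a label only when its current face classifier is unsure. The threshold, the ask_human callback and the classifier interface (predict/update, as in the fern sketch above) are assumptions made for illustration:

        def interactive_learning_step(classifier, face_patch, ask_human,
                                      margin_threshold=1.0):
            """Predict the identity of a face patch; if the margin between the two best
            identities is small, ask the person for a label (e.g. via speech) and update
            the classifier incrementally with that volunteered assistance."""
            scores = sorted(enumerate(classifier.predict(face_patch)),
                            key=lambda s: s[1], reverse=True)
            best_label, best_score = scores[0]
            margin = best_score - scores[1][1] if len(scores) > 1 else float("inf")
            if margin < margin_threshold:
                label = ask_human(face_patch)        # volunteered assistance
                if label is not None:
                    classifier.update(face_patch, label)
                    return label
            return best_label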

    Robot social-aware navigation framework to accompany people walking side-by-side

    We present a novel robot social-aware navigation framework to walk side-by-side with people in crowded urban areas in a safe and natural way. The new system includes the following key contributions: to propose a new robot social-aware navigation model to accompany a person; to extend the Social Force Model,

    Cooperative robots in people guidance mission: DTM model validation and local optimization motion

    This work presents a novel approach for locally optimizing the work of cooperative robots and minimizing the displacement of humans in a people-guidance mission. The problem is addressed by introducing a "Discrete Time Motion" (DTM) model and a new cost function that minimizes the work required by the robots to lead and regroup people. Furthermore, an analysis of the forces acting among robots and humans is presented through simulations of different robot and human configurations and behaviors. Finally, we describe the modeling and simulation-based validation process that has been used to explore the new possibilities of interaction when humans are guided by teams of robots working cooperatively in urban areas.
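
    The cost function itself is not reproduced in this abstract; a generic form consistent with the description, with the weighting \lambda introduced here purely for illustration, would trade off the work done by the robots against the displacement imposed on the guided humans:

        J = \sum_{r \in R} \sum_{t} \lVert \mathbf{f}_r(t) \rVert \, \lVert \Delta\mathbf{x}_r(t) \rVert + \lambda \sum_{h \in H} \sum_{t} \lVert \Delta\mathbf{x}_h(t) \rVert

    where R and H are the sets of robots and guided humans, \mathbf{f}_r(t) the force applied by robot r at time step t, and \Delta\mathbf{x}(t) the displacement during that step.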

    Aerial social force model: a new framework to accompany people using autonomous flying robots

    We propose a novel Aerial Social Force Model (ASFM) that allows autonomous flying robots to accompany humans in urban environments in a safe and comfortable manner. To date, we are not aware of any other state-of-the-art method that accomplishes this task. The proposed approach is a 3D version of the Social Force Model (SFM) for aerial robots, which includes an interactive human-robot navigation scheme capable of predicting human motions and intentions so as to safely accompany people to their final destination. ASFM also introduces a new metric to fine-tune the parameters of the force model and to evaluate the performance of the aerial robot companion based on comfort and the distance between the robot and humans. The presented approach is extensively validated in diverse simulations and real experiments, and compared against other similar works in the literature. ASFM attains remarkable results and shows itself to be a valuable framework for social robotics applications, such as guiding people or human-robot interaction.
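
    The 3D force terms are not detailed in the abstract; a minimal numeric sketch of a Social-Force-style resultant for an aerial companion, with all function names and parameter values invented here for illustration:

        import numpy as np

        def aerial_social_force(robot_pos, robot_vel, goal, humans,
                                desired_speed=1.2, tau=0.5, A=2.0, B=1.0):
            """Resultant 3D force on a flying companion: relaxation toward the goal
            plus exponential repulsion from nearby humans (illustrative parameters)."""
            robot_pos, robot_vel, goal = map(np.asarray, (robot_pos, robot_vel, goal))
            # Attraction: steer toward the desired velocity pointing at the goal.
            to_goal = goal - robot_pos
            desired_vel = desired_speed * to_goal / np.linalg.norm(to_goal)
            force = (desired_vel - robot_vel) / tau
            # Repulsion: keep a comfortable 3D distance from each human.
            for h in humans:
                diff = robot_pos - np.asarray(h)
                dist = np.linalg.norm(diff)
                force += A * np.exp(-dist / B) * diff / dist
            return force

        # Example: drone at 2 m height accompanying a person toward a waypoint.
        f = aerial_social_force(robot_pos=[0.0, 0.0, 2.0], robot_vel=[0.0, 0.0, 0.0],
                                goal=[5.0, 0.0, 2.0], humans=[[1.0, 1.0, 1.7]])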

    Adaptive social planner to accompany people in real-life dynamic environments

    Robots must develop the ability to navigate socially in uncontrolled urban environments if they are to be included in our daily lives. This paper presents a new robot navigation framework called the adaptive social planner (ASP) and a robotic system that includes the ASP. Our results and previous work show that the ASP can adapt to different collaborative tasks involving humans and robots, such as independent robot navigation, human-robot accompaniment, a robot approaching people, robot navigation tasks that combine learning techniques, and human-drone interactions. Our approach in this paper focuses on demonstrating how the ASP can be customized to implement two new methods for group accompaniment: the adaptive social planner using a V-formation model to accompany groups of people (ASP-VG) and the adaptive social planner using a side-by-side model to accompany groups of people (ASP-SG). These two methods allow a robot to accompany groups of people while anticipating the behavior of the humans and of the uncontrolled urban environment. We also develop four new robot skills to deal with unexpected human behaviors, such as rearrangement of the companions' positions inside the group, unforeseen changes in the velocity of the robot's companions, occlusions among group members, and changes in the direction toward destinations in the environment. Moreover, we develop different performance metrics, based on social distances, to evaluate the robot's tasks. In addition, we present the guidelines followed in performing the real-life experiments with volunteers, including human-robot speech interaction to help humans build a relationship with the robot and become genuinely involved in the mutual accompaniment. Finally, we include an exhaustive validation of the methods by evaluating the behavior of the robot in synthetic and real-life experiments, and we incorporate five user studies to evaluate aspects related to social acceptability and people's preferences regarding both types of robot group accompaniment.
    Funding: ROCOTRANSP, national project funded by the Ministerio de Ciencia e Innovación (MCIN) and the Agencia Española de Investigación (AEI), award PID2019-106702RB-C21 MCIN/AEI/10.13039/501100011033, grant recipient Alberto Sanfeliu Cortés (UPC); TERRINET, European project funded by the European Commission, award H2020-INFRAIA-2017-1-two-stage-730994, grant recipient Alberto Sanfeliu Cortés (UPC).
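
    The geometric details of the V-formation and side-by-side group accompaniment are not given in the abstract; a minimal sketch of how goal positions for the group members could be laid out around the group's heading, with the spacing values and function names invented here for illustration:

        import numpy as np

        def formation_goals(group_center, heading, n_members, mode="v",
                            lateral_gap=1.0, back_gap=0.8):
            """Goal positions for group members (robot included) relative to the
            group's center and heading; spacing values are illustrative assumptions."""
            heading = np.asarray(heading, dtype=float)
            heading /= np.linalg.norm(heading)
            left = np.array([-heading[1], heading[0]])      # unit vector to the left
            goals = []
            for i in range(n_members):
                side = 1 if i % 2 == 0 else -1              # alternate left/right of center
                rank = (i + 1) // 2                          # distance from the apex/center
                if mode == "v":
                    # V formation: members fan out behind the leading apex.
                    offset = side * rank * lateral_gap * left - rank * back_gap * heading
                else:
                    # Side-by-side: members spread along a line perpendicular to heading.
                    offset = side * rank * lateral_gap * left
                goals.append(np.asarray(group_center, dtype=float) + offset)
            return goals

        # Example: three members walking in the +x direction in a V formation.
        print(formation_goals([0.0, 0.0], [1.0, 0.0], n_members=3, mode="v"))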