Members


Permanents

Name Affiliation E-mail Web page Twitter
Dr. Francisco J. Rodríguez Lera University of Luxembourg fjrodl@unileon.es
Dr. Francisco Martín Rico (Team leader) Rey Juan Carlos University francisco.rico@urjc.es https://gsyc.urjc.es/~fmartin/ @FMrico
Dr. Vicente Matellán Olivera University of León vmato@unileon.es http://robotica.unileon.es/vmo/ @vmatellan

Current non-permanent

Name Affiliation E-mail Web page Twitter
Eng. Jonathan Ginés Rey Juan Carlos University jonathangines@hotmail.com

Gone, but not forgotten

Name Affiliation E-mail Web page Twitter
Eng. Javier Gutierrez-Maturana Rey Juan Carlos University
Eng. Fernando Casado University of León
Eng. Álvaro Moreno Rey Juan Carlos University
Eng. Jesús Balsa University of León
Victor Martín Rey Juan Carlos University
Carlos E. Agüero Rey Juan Carlos University
Domeneç Puig Rovira i Virgili University
Tomás González Rovira i Virgili University
Miguel Cazorla Quevedo University of Alicante
Boyán Bonev University of Alicante
David Herrero University of Murcia
Humberto Barbera University of Murcia

Technical overview

ROS- and ROS2-based development, running on board the Pepper robot


Social Robotics

Speech understanding

Speech generation

Human Robot Interaction

Perception

Human detection

Object detection

Probabilistic mapping of objects/people

Deep Learning techniques

Mapping and localization

Long-term 3D mapping

Enhanced localization (AMCL)

Long-term navigation (move_base)

Behavior generation

BICA (Behavior-based Iterative Component Architecture)

PDDL Planning

Development framework

Softbank provides a development framework called NaoQi for the Pepper robot. Although this framework is very powerful, we consider that it is not adequate for the challenges posed by RoboCup, so we will use ROS as our main development framework. Softbank provides an interface between NaoQi and ROS that normally runs remotely on a computer; we have managed to run this software on board the robot. Our intention is to take advantage of all the software available in ROS to tackle the RoboCup@Home tests. In addition, we are working on using ROS2 for decentralized, real-time and secure communications.

Dialogue generation

Communication with humans during the challenges will be multimodal: both through voice and through the tablet. In this way we avoid any blockage during a test caused by problems in the interaction. Several alternatives for dialogue will be used: the NaoQi modules developed for this purpose and the Google Speech API. In addition, the tablet will feature an HTML5 interface with access to options and debugging information; we will use the RobotWebTools software to display the necessary information on the robot's tablet.
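The fallback logic described above (switch modality rather than block a test) can be sketched as follows. This is a minimal illustration, not the actual NaoQi, Google Speech or RobotWebTools APIs; the backend functions and the retry count are assumptions.

```python
# Sketch of the multimodal fallback idea: try speech recognition first,
# and fall back to the tablet interface if recognition keeps failing.
# The backend callables are hypothetical placeholders.

class RecognitionError(Exception):
    """Raised by a speech backend when the audio cannot be understood."""


def ask_user(question, speech_backend, tablet_backend, retries=2):
    """Ask via voice; after `retries` failed attempts, use the tablet."""
    for _ in range(retries):
        try:
            return speech_backend(question)
        except RecognitionError:
            continue  # noisy audio, try again
    # Voice failed: avoid blocking the test by switching modality.
    return tablet_backend(question)


if __name__ == "__main__":
    def noisy_speech(q):
        raise RecognitionError("could not understand audio")

    def tablet(q):
        return "kitchen"  # the user taps an option on the tablet

    print(ask_user("Where should I go?", noisy_speech, tablet))  # kitchen
```

The same pattern generalizes to any ordered list of interaction channels: each backend is tried in turn until one succeeds.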

Navigation

We have done extensive work on improving the ROS navigation stack for reliable, efficient and safe long-term navigation. Starting from architectural maps (walls and doors), we add the furniture of the environment while the robot operates. We use both the laser and the robot's 3D camera to perceive obstacles.
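The separation between the static architectural map and the furniture observed at run time can be sketched with a small occupancy grid. This is an illustration of the idea only, with made-up cell values; it is not the ROS costmap implementation.

```python
# Minimal sketch: a static architectural map (walls/doors) plus an
# overlay of furniture sensed at run time by the laser / 3D camera.
# Grid values and coordinates are illustrative assumptions.

FREE, WALL, FURNITURE = 0, 1, 2


def make_map(width, height, walls):
    """Build the static architectural map."""
    grid = [[FREE] * width for _ in range(height)]
    for (x, y) in walls:
        grid[y][x] = WALL
    return grid


def add_obstacles(static_map, sensor_hits):
    """Overlay sensed obstacles without modifying the architectural map."""
    overlay = [row[:] for row in static_map]
    for (x, y) in sensor_hits:
        if overlay[y][x] == FREE:   # never overwrite a wall
            overlay[y][x] = FURNITURE
    return overlay


if __name__ == "__main__":
    grid = make_map(4, 3, walls=[(0, 0), (1, 0), (2, 0)])
    live = add_obstacles(grid, sensor_hits=[(1, 1), (0, 0)])
    assert live[1][1] == FURNITURE  # sensed furniture added
    assert live[0][0] == WALL       # architectural map preserved
```

Keeping the two layers separate means furniture can be forgotten or re-observed without ever corrupting the walls-and-doors map used for long-term localization.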

Behavior-based Architecture

We have reused the behavior-based architecture developed for our participation in robot soccer: BICA. We have implemented this architecture as a ROS package, in which components are executed concurrently and hierarchically to generate complex behaviors. The lower-level components are reactive, while the higher-level components are implemented as state machines. For RoboCup 2018 in Montreal, we plan to include a PDDL planner inside BICA. We will use an implementation based on ROSPlan that will allow us to tackle more complex tasks.

Context awareness

Context awareness is a fundamental component of human-robot interaction. Understanding the context of the user's activity improves the robot's overall decision-making process because, given a context, the robot can reduce the set of feasible actions. We consider that gathering environmental sounds with an on-board microphone can improve context recognition. For this purpose, we have designed, developed and tested a computationally lightweight Environment Recognition Component (ERC). In this iteration our ERC is able to recognize a set of four home bells. This component provides information to a Context-Awareness Component (CAC) that implements a hierarchical Bayesian network to tag the user's activities based on the American Occupational Therapy Association (AOTA) classification.
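The core inference step behind the CAC can be illustrated with a single Bayesian update: a sound recognized by the ERC re-weights the belief over user activities. The activity labels and all probabilities below are made-up illustrative numbers, not the AOTA taxonomy or the real network parameters.

```python
# Minimal sketch of the Bayesian idea behind the CAC: a recognized
# sound (from the ERC) updates the belief over user activities.
# Labels and probabilities are illustrative assumptions.

def bayes_update(prior, likelihood, observation):
    """P(activity | sound) ∝ P(sound | activity) * P(activity)."""
    unnorm = {a: likelihood[a].get(observation, 0.0) * p
              for a, p in prior.items()}
    total = sum(unnorm.values())
    return {a: v / total for a, v in unnorm.items()}


if __name__ == "__main__":
    prior = {"cooking": 0.5, "resting": 0.5}
    likelihood = {
        "cooking": {"oven_bell": 0.8, "doorbell": 0.1},
        "resting": {"oven_bell": 0.1, "doorbell": 0.4},
    }
    posterior = bayes_update(prior, likelihood, "oven_bell")
    assert posterior["cooking"] > posterior["resting"]
```

In the hierarchical network, updates like this one are chained across several levels, but each node performs essentially this computation.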

Object and people detection

We are working on a perception system based on Deep Learning. We use YOLOv2 to detect objects and people in images. This software is a C/C++ implementation built on Darknet, a convolutional neural network framework, and it is fully integrated into ROS through the darknet_ros package.

We run darknet_ros using the images from the robot's camera. Its output is a set of bounding boxes on the image. The 2D image is registered with the image of the RGBD camera, and after applying a distance-based algorithm we obtain the 3D position of the pixels belonging to the detected objects or people. This information is used to update the probability maps of objects and people. These maps, one per detected category, are octomaps that represent the probability of finding an element of that category in space. Positions where a detection occurs increase their probability, while the rest of the positions decrease their probability over time. The speed of "forgetting" depends on whether a category contains objects of a static nature (furniture) or a dynamic one (people). These maps are aligned with the maps used for navigation, so this probabilistic memory can be used to navigate to the position where an element is likely to be found. Figure 4 shows the darknet output, how we detect chairs in our laboratory, and how these chairs are incorporated into this probabilistic memory. Note that below the octomap is the map of the environment that the robot uses to navigate.
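The boost-and-decay update of this per-category probabilistic memory can be sketched as follows. The decay rates, boost value and voxel keys are illustrative assumptions; the real implementation operates on octomap voxels.

```python
# Sketch of the per-category probabilistic memory: detections raise the
# probability of the observed voxels, everything decays toward zero,
# and the "forgetting" rate depends on whether the category is static
# (furniture) or dynamic (people). All numbers are assumptions.

DECAY = {"chair": 0.99, "person": 0.80}   # static categories forget slowly


def update_map(prob_map, category, detections,
               boost=0.3, floor=0.0, ceil=1.0):
    """One update cycle: decay every voxel, then boost detected ones."""
    decay = DECAY.get(category, 0.9)
    for voxel in list(prob_map):
        prob_map[voxel] = max(floor, prob_map[voxel] * decay)
    for voxel in detections:
        prob_map[voxel] = min(ceil, prob_map.get(voxel, 0.0) + boost)
    return prob_map


if __name__ == "__main__":
    chairs = {(1, 2, 0): 0.6}
    update_map(chairs, "chair", [(1, 2, 0)])   # re-detected: grows
    assert chairs[(1, 2, 0)] > 0.6

    people = {(3, 0, 0): 0.6}
    for _ in range(5):
        update_map(people, "person", [])       # not seen: fades fast
    assert people[(3, 0, 0)] < 0.3
```

Because each map shares the navigation frame, a voxel with high probability can be handed directly to the navigation stack as a goal when searching for an object of that category.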

In addition to this perception mechanism, we are working on an algorithm we call Fast Training. The main idea is to have a convolutional network trained to detect humans that can then be trained in situ to distinguish one person from the rest of the crowd. This algorithm will be used in the Help-me-carry test.

ROS2 and Security

We are planning to develop some of the modules in ROS2 in order to achieve an efficient and convenient distribution of the components. Since our goal is to use our systems in real environments, we will implement our software with security mechanisms that prevent intrusions, attacks and unauthorized manipulations.


Publications


In @Home competitions



In SPL competitions (As TeamChaos and SPITeam)


In 4-legged competitions (As TeamChaos)


Downloads

We will publish ALL the software used in this competition under Open Source licenses.