Spatial Reasoning and Interaction for Real-World Robotics

Workshop at IROS '15

Abstract

The aim of this workshop is to bring together researchers working in the field of cognitive robotics with a special interest in spatial reasoning, in particular experts in situated HRI and NLP (including semantic grounding, dialogue, multi-party interaction, etc.) and experts in autonomous mobile robotics (navigation in dynamically changing environments, moving-obstacle recognition, motion estimation and path planning, multi-robot systems).

Topics

Spatial Reasoning
Cognitive Human-Robot Interaction
Situated Human-Robot Interaction
Environment Modelling

List of presenters

Presenter              Affiliation
John Kelleher          Dublin Institute of Technology, Ireland
Mary Ellen Foster      University of Glasgow, UK
Dimitra Gkatzia        Heriot-Watt University, Edinburgh, UK
Luca Iocchi            Sapienza Università di Roma, Rome, Italy
Diedrich Wolter        Universität Bamberg, Germany
Christian Landsiedel   Technische Universität München, Munich, Germany
Dirk Wollherr          Technische Universität München, Munich, Germany
Verena Rieser          Heriot-Watt University, Edinburgh, UK

Program

Time Talk
14:00 - 14:30 Verena Rieser, Dirk Wollherr: Introduction
14:30 - 15:00 John Kelleher: Referring Expressions in the Context of Perception Errors in Situated Dialogue in the Toy Block Experiment
We performed an experiment in which human participants interacted through a natural language dialogue interface with a simulated robot to fulfil a series of object manipulation tasks. We introduced errors into the robot's perception, and observed the resulting problems in the dialogues and their resolutions. We then introduced different methods for the user to request information about the robot's understanding of the environment. In this work, we describe the effects that the robot's perceptual errors and the information request options available to the participant had on the reformulation of the referring expressions the participants used when resolving an unsuccessful reference.
15:00 - 15:30 Mary Ellen Foster: Natural Face-to-face Conversation with Humanoid Robots
When humans engage in face-to-face conversation, they use their voices, faces, and bodies together in a rich, multimodal, continuous, interactive process. For a robot to participate fully in this sort of natural, face-to-face conversation in the real world, it must not only be able to understand the multimodal communicative signals of its human partners, but also produce understandable, appropriate, and natural communicative signals in response. A robot capable of this form of interaction can be used in a large number of areas: for example, it could take the role of a home companion, a museum tour guide, a tutor, or a personal health coach. While a number of such robots have been successfully deployed, the full potential of socially interactive robots has not been realised, due both to incomplete models of human multimodal communication and to technical limitations. However, thanks to recent developments in a number of areas, including techniques for data-driven interaction models, methods of evaluating interactive robots in real-world contexts, and off-the-shelf component technology, the goal of developing a naturally interactive robot is now increasingly achievable.
15:30 - 15:45 Coffee Break
15:45 - 16:15 Dimitra Gkatzia: From the Virtual to the Real World: Referring to Objects in Real-World Spatial Scenes
Predicting the success of referring expressions (RE) is vital for real-world applications such as navigation systems and robots. Traditionally, research has focused on studying Referring Expression Generation (REG) in virtual, controlled environments. In this paper, we describe a novel study of spatial references in real scenes rather than virtual ones. First, we investigate how humans describe objects in open, uncontrolled scenarios and compare our findings to those reported for virtual environments. We show that REs in real-world scenarios differ significantly from those in virtual worlds. Second, we propose a novel approach to quantifying image complexity when complete annotations are not present (e.g. due to poor object recognition capabilities), and third, we present a model for predicting the success of REs for objects in real scenes. Finally, we discuss implications for systems that produce natural language, such as robots, as well as future directions.
16:15 - 16:45 Luca Iocchi: Approaching Qualitative Spatial Reasoning About Distances and Directions in Robotics
One of the long-term goals of our society is to build robots able to live side by side with humans. In order to do so, robots need to be able to reason in a qualitative way. To this end, over recent years the Artificial Intelligence research community has developed a considerable number of qualitative reasoners. The majority of these approaches, however, have been developed under the assumption that suitable representations of the world were available. In this paper, we propose a method for performing qualitative spatial reasoning in robotics on abstract representations of environments, automatically extracted from metric maps. Both the representation and the reasoner are used to ground commands given vocally by the user. The approach has been verified on a real robot interacting with several non-expert users.
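
As a rough illustration of the kind of qualitative abstraction over metric data discussed in this talk, the following minimal sketch maps a metric offset in the robot's frame to qualitative distance and direction labels. The labels, thresholds, and function names are illustrative assumptions for this sketch, not the speaker's implementation.

    import math

    # Illustrative distance bands in metres; everything beyond the last band is "far".
    # These thresholds are assumptions, not values from the talk.
    DISTANCE_BANDS = [(1.0, "near"), (3.0, "medium")]

    def qualitative_distance(dx, dy):
        """Map a metric offset (dx, dy) in the robot frame to a distance label."""
        d = math.hypot(dx, dy)
        for threshold, label in DISTANCE_BANDS:
            if d <= threshold:
                return label
        return "far"

    def qualitative_direction(dx, dy):
        """Map a metric offset to one of four 90-degree direction sectors."""
        angle = math.degrees(math.atan2(dy, dx))  # 0 degrees = straight ahead, positive = left
        if -45 <= angle < 45:
            return "in front of"
        if 45 <= angle < 135:
            return "to the left of"
        if -135 <= angle < -45:
            return "to the right of"
        return "behind"

    # Example: an object 0.8 m ahead and 0.5 m to the left of the robot.
    offset = (0.8, 0.5)
    print(qualitative_distance(*offset), "and", qualitative_direction(*offset), "the robot")

Grounding a vocal command then amounts to matching such qualitative labels against the relations extracted from the metric map, rather than reasoning over raw coordinates.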
16:45 - 17:00 Coffee Break
17:00 - 17:30 Diedrich Wolter: Action Verbalisation in Joint Human-Robot Activities using Qualitative Spatial Representation and Reasoning
We present an approach to integrating planning and verbalising displacement actions by a robot in a cooperative BLOCKS world tower building scenario. Our hybrid planning approach uses Qualitative Spatial Logic (QSL) to combine abstract symbolic planning with quantitative action planning. We conceptualise displacement actions as desired results with respect to position-orientation pairs of a manipulated object, and describe them using the Probabilistic Reference And GRounding framework (PRAGR). We make use of the qualitative discretisation performed by the planner in order to provide the REG engine with a conceptually relevant, finite set of distractor position-orientation pairs from which the target configuration should be discriminated. As PRAGR provides an estimate of the likelihood that a resulting action satisfies the robot's requirements, the planner can determine not only the best expression for describing a particular action, but also the point in the building process at which an action instruction to the user is most likely to lead to success.
17:30 - 18:00 Christian Landsiedel: Augmenting and Reasoning in Semantically Enriched Maps using Open Data
Complex robotic tasks require the use of knowledge that cannot be acquired with the sensor repertoire of a mobile, autonomous robot alone. For robots navigating in urban environments, geospatial open-data repositories such as OpenStreetMap provide a source for such knowledge. We propose the integration of a 3D metric environment representation with the semantic knowledge from such a database, and describe an application in which road network information from OpenStreetMap is used to improve road geometry information determined from laser data. This approach is evaluated on a challenging dataset of the Munich inner city.
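
As a hypothetical illustration of pulling such open geospatial knowledge into a robot's map, the sketch below queries the public Overpass API of OpenStreetMap for road geometry inside a small bounding box around the Munich inner city. The bounding box, the query, and the idea of fusing the result with laser data are assumptions made for illustration, not the authors' pipeline.

    import requests

    OVERPASS_URL = "https://overpass-api.de/api/interpreter"

    # All ways tagged as roads inside a bounding box (south, west, north, east)
    # roughly covering part of the Munich inner city; coordinates are illustrative.
    query = """
    [out:json][timeout:25];
    way["highway"](48.135,11.565,48.145,11.580);
    out geom;
    """

    response = requests.post(OVERPASS_URL, data={"data": query}, timeout=30)
    response.raise_for_status()

    for way in response.json().get("elements", []):
        name = way.get("tags", {}).get("name", "<unnamed road>")
        # 'geometry' is the lat/lon polyline of the road; in a mapping pipeline it
        # could be projected into the metric map and compared against road geometry
        # estimated from laser data.
        print(name, len(way.get("geometry", [])), "vertices")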

Publication of contributions

The organizers are planning to publish the contributions to this workshop in a special issue of a journal. The Journal of the Robotics Society of Japan (RSJ) has been contacted; the editor's initial reactions have been positive.

Organizers

Prof. Dr.-Ing. habil. Dirk Wollherr
(Representing the area of autonomous mobile robotics.)
Chair of Automatic Control Engineering (LSR)
Technische Universität München
Theresienstraße 90/Building N5, 80333 München
Tel. +49-89-289-23401, Fax +49-89-289-28340
Email: dw@tum.de

Dr. Verena Rieser
(Representing the area of Natural Language Processing.)
School of Mathematical and Computer Sciences (MACS)
Heriot-Watt University
Edinburgh, EH14 4AS
Tel. +44 (0)131 451 4192, Fax +44 (0)131 451 3327
Email: v.t.rieser@hw.ac.uk

This workshop is supported by the RAS Technical Committee on Cognitive Robotics (CORO).