Which tool to use? Grounded reasoning in everyday environments with assistant robots
Abstract
We present a cooperative reasoning agent embodied in a mobile robot that explores its environment with a camera. Given an action and an object specified by the user, for example “I want to open a wine bottle.”, the robot infers the missing knowledge: it inspects the available tools and recommends the most suitable one. The reasoning is based, on the one hand, on a static ontology that relates the actions “open” and “cut” to a fixed set of tools. On the other hand, unknown actions and tools are resolved by looking up synonyms and super-/sub-class relations in Wikidata and WordNet; in this way, the robot maps knowledge from linked data onto its internal, interpretable ontology. To make the reasoning process traceable, the robot explains its conclusions via text-to-speech. Finally, we evaluate the performance of the system under different settings for scanning the linked data.
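The abstract gives no implementation details; the following is a minimal sketch of the linked-data lookup step it describes, assuming Python with NLTK’s WordNet corpus and SPARQLWrapper against the public Wikidata SPARQL endpoint. The function names, the agent string, and the known-concept set (`KNOWN_TOOLS`, `map_to_ontology`, etc.) are illustrative placeholders and are not taken from the system itself.

```python
# Illustrative sketch only: the paper's actual ontology and matching logic are not
# specified in the abstract. Assumes `pip install nltk SPARQLWrapper`, a prior
# nltk.download("wordnet"), and network access to the Wikidata endpoint.
from nltk.corpus import wordnet as wn
from SPARQLWrapper import SPARQLWrapper, JSON

WIKIDATA_ENDPOINT = "https://query.wikidata.org/sparql"

# Hypothetical stand-in for the robot's fixed, interpretable set of tool concepts.
KNOWN_TOOLS = {"corkscrew", "knife", "bottle opener", "scissors"}


def wordnet_candidates(term: str) -> set[str]:
    """Collect synonym and hypernym lemmas of `term` from WordNet."""
    labels = set()
    for synset in wn.synsets(term):
        for lemma in synset.lemmas():
            labels.add(lemma.name().replace("_", " "))
        for hypernym in synset.hypernyms():
            for lemma in hypernym.lemmas():
                labels.add(lemma.name().replace("_", " "))
    return labels


def wikidata_superclasses(label: str, lang: str = "en") -> set[str]:
    """Look up labels of super-classes (wdt:P279) of a Wikidata item matching `label`."""
    sparql = SPARQLWrapper(WIKIDATA_ENDPOINT, agent="tool-reasoning-sketch/0.1")
    sparql.setReturnFormat(JSON)
    sparql.setQuery(f"""
        SELECT DISTINCT ?superLabel WHERE {{
          ?item rdfs:label "{label}"@{lang} ;
                wdt:P279 ?super .
          ?super rdfs:label ?superLabel .
          FILTER(LANG(?superLabel) = "{lang}")
        }} LIMIT 25
    """)
    bindings = sparql.query().convert()["results"]["bindings"]
    return {b["superLabel"]["value"] for b in bindings}


def map_to_ontology(unknown_term: str) -> set[str]:
    """Map an unknown action or tool label onto known ontology concepts via linked data."""
    candidates = wordnet_candidates(unknown_term) | wikidata_superclasses(unknown_term)
    return {c for c in candidates if c.lower() in KNOWN_TOOLS}


if __name__ == "__main__":
    # An unfamiliar label resolved through its WordNet hypernym "knife";
    # the result may be empty if no path to a known concept is found.
    print(map_to_ontology("pocketknife"))
```

Querying both sources and intersecting the retrieved labels with the robot’s known concepts mirrors, at a high level, the mapping from linked data to the internal ontology that the abstract describes.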