Robot Affordances

Software for learning visual robot affordances.

Introduction

This website illustrates a software framework for experiments in visual robot affordances. It is aimed at the robotics, psychophysics, and neuroscience communities. We provide documentation and tutorials for some practical applications.

The pipeline of the framework is as follows:

1. A visual segmentation algorithm is run on an image stream.
2. Features of the segmented objects and of their constituent parts (e.g., effector and handle of tools) are extracted.
3. The features are then used for higher-level inference and for reasoning about object affordances.

Complete examples are provided.
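As a rough, self-contained illustration of steps 1 and 2 (and of the kind of descriptors that step 3 could consume), the sketch below segments a single grayscale image with OpenCV and prints a few simple shape features per blob. It is not the project's actual segmentation or feature code: the input file name, the Otsu thresholding, the area cut-off, and the descriptor choices are all assumptions made for the example.

// Conceptual pipeline sketch (not the project's actual modules or API).
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    // 1) Segmentation: a plain Otsu threshold stands in for the framework's
    //    visual segmentation algorithm.
    cv::Mat frame = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);  // assumed input
    if (frame.empty())
        return 1;
    cv::Mat mask;
    cv::threshold(frame, mask, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // 2) Feature extraction: simple shape descriptors per segmented blob.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (const auto& c : contours)
    {
        const double area = cv::contourArea(c);
        if (area < 100.0)  // skip tiny blobs (arbitrary cut-off)
            continue;

        std::vector<cv::Point> hull;
        cv::convexHull(c, hull);
        const double convexity   = area / std::max(cv::contourArea(hull), 1.0);
        const double perimeter   = cv::arcLength(c, true);
        const double circularity = 4.0 * CV_PI * area / (perimeter * perimeter);

        // 3) Higher-level inference would consume descriptors like these as
        //    evidence about object affordances (see the query example below).
        std::cout << "blob: area=" << area
                  << " convexity=" << convexity
                  << " circularity=" << circularity << std::endl;
    }
    return 0;
}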

To run the first two steps, please launch this script from the POETICON++ project repository. To train an affordance knowledge base, launch affordancesExploration. Finally, to make inference queries about the learned affordance knowledge, follow the instructions on this page.
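A learned affordance knowledge base is typically queried probabilistically, e.g. for the likelihood of an effect given an action and an object's visual features. The toy C++ lookup below is only meant to show the shape of such a query; the variable names, discretized states, and probabilities are invented for illustration and do not come from the actual knowledge base or its interface.

// Hypothetical affordance query: P(effect = "moves" | shape, action).
// All states and numbers below are made up for illustration only.
#include <iostream>
#include <map>
#include <string>
#include <utility>

int main()
{
    std::map<std::pair<std::string, std::string>, double> pMoves = {
        {{"spherical", "tap"},   0.9},
        {{"spherical", "grasp"}, 0.7},
        {{"box-like",  "tap"},   0.3},
        {{"box-like",  "grasp"}, 0.8},
    };

    const std::string shape  = "spherical";  // discretized visual descriptor
    const std::string action = "tap";        // motor action under consideration
    const auto it = pMoves.find({shape, action});
    if (it != pMoves.end())
        std::cout << "P(moves | " << shape << ", " << action << ") = "
                  << it->second << std::endl;
    return 0;
}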

Tutorials & Documentation

Dependencies:

Installation on Linux:

git clone https://github.com/gsaponaro/robot-affordances
cd robot-affordances && mkdir build && cd build && cmake .. && make

Online documentation is available at https://gsaponaro.github.io/robot-affordances/.

Publications

Other publications that use our framework

License

Released under the terms of the GPL v2.0 or later. See the file LICENSE for details.