October 27, 2015 10:00 - 11:00
BSI Central Building 1F Seminar Room
Humans are able to perform a wide variety of complex actions, manipulating a very large number of objects. We can predict the outcomes of our actions and how to use different objects; hence, we have excellent action and object understanding. Artificial agents, on the other hand, still fail miserably in this respect. It is particularly puzzling how inexperienced, young humans can acquire such knowledge, bootstrapped by exploration and extended by supervision. In this study we have therefore addressed the question of how to structure the realm of actions and objects into dynamic representations that allow for the easy learning of different action and object concepts. Performing different manipulation actions on a tabletop (e.g., the actions of “making breakfast”), we show with our robots that this indeed leads to a kind of implicit (unreflected) understanding of action and object concepts, allowing the agent to generalize actions and redefine object uses according to need.
- Open to Public
- Taro Toyoizumi [Neural Computation and Adaptation]
Name: Reiko Kiyotaki