Towards a theoretical framework for learning multi-modal patterns for embodied agents

Noceti N, Caputo B, Baldassarre L, Barla A, Rosasco L, Odone F, Sandini G, Castellini C (2009)


Publication Type: Conference contribution

Publication year: 2009

Book Volume: 5716 LNCS

Page Range: 239-248

Conference Proceedings Title: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Event location: Italy

ISBN: 3642041450

DOI: 10.1007/978-3-642-04146-4_27

Abstract

Multi-modality is a fundamental feature of biological systems, allowing them to achieve robust understanding while coping with uncertainty. Relatively recent studies have shown that multi-modal learning is a potentially effective add-on to artificial systems, allowing the transfer of information from one modality to another. In this paper we propose a general architecture for jointly learning visual and motion patterns: by means of regression theory we model a mapping between the two sensory modalities, improving the performance of artificial perceptive systems. We present promising results on a case study of grasp classification in a controlled setting and discuss future developments.
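The abstract's core technical idea is learning a regression map from one sensory modality to another. The sketch below is purely illustrative and not the paper's method: it assumes generic visual and motion feature matrices and uses closed-form regularized least squares; all names, shapes, and the regularization weight are assumptions.

import numpy as np

# Illustrative only: map visual features X (n_samples x d_vis) to
# motion features Y (n_samples x d_mot) with regularized least squares.
# All dimensions and the noise level are arbitrary assumptions.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))                       # hypothetical visual descriptors
W_true = rng.standard_normal((64, 22))
Y = X @ W_true + 0.1 * rng.standard_normal((200, 22))    # hypothetical motion features

lam = 1e-2                                               # ridge regularization weight
d = X.shape[1]
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)  # closed-form ridge solution

Y_pred = X @ W                                           # motion predicted from vision alone
print("mean squared error:", np.mean((Y_pred - Y) ** 2))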


How to cite

APA:

Noceti, N., Caputo, B., Baldassarre, L., Barla, A., Rosasco, L., Odone, F., ... Castellini, C. (2009). Towards a theoretical framework for learning multi-modal patterns for embodied agents. In Lecture Notes in Computer Science (Vol. 5716, pp. 239-248). Berlin, Heidelberg: Springer.

MLA:

Noceti, Nicoletta, et al. "Towards a theoretical framework for learning multi-modal patterns for embodied agents." Proceedings of the 15th International Conference on Image Analysis and Processing (ICIAP 2009), Italy, 2009, pp. 239-248.

BibTeX:
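The original BibTeX download link did not survive extraction; the entry below is assembled from the metadata in this record. The citation key is an editorial choice, and the booktitle follows the conference name given in the MLA citation above.

@inproceedings{noceti2009multimodal,
  author    = {Noceti, N. and Caputo, B. and Baldassarre, L. and Barla, A. and Rosasco, L. and Odone, F. and Sandini, G. and Castellini, C.},
  title     = {Towards a theoretical framework for learning multi-modal patterns for embodied agents},
  booktitle = {Proceedings of the 15th International Conference on Image Analysis and Processing (ICIAP 2009)},
  series    = {Lecture Notes in Computer Science},
  volume    = {5716},
  pages     = {239--248},
  publisher = {Springer, Berlin, Heidelberg},
  year      = {2009},
  isbn      = {3642041450},
  doi       = {10.1007/978-3-642-04146-4_27}
}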