ENS, salle L363/365, 24 rue Lhomond, 75005 Paris
Interacting embodied agents, be they groups of adult humans engaged in a coordinated task, autonomous robots acting in an environment, or a mother teaching a child, must seamlessly coordinate their actions to achieve a collaborative goal. Inter-agent coordination depends crucially on the participants' external behaviors, whereby the behavior of one participant organizes the actions of the other in real time. In this talk, I will review a set of studies using a novel experimental paradigm in which we collected high-density multimodal behavioral data (including eye tracking, motion tracking, audio, and video) from parent-child interactions. We compared and analyzed the dynamic structure of free-flowing parent-child interactions in the context of language learning, and discovered characteristic multimodal behaviors from parents and children that are informatively time-locked to words and their intended referents and predictive of word learning. I will conclude by discussing how high-density micro-level behavioral data create tangible opportunities for understanding human cognitive systems and for building artificial intelligence systems.