Keynote Speakers


Beatrice De Gelder

Maastricht University

Virtual bodies, virtual selves with real emotions.

Abstract: Traditionally, some of the core visual processes underlying social interaction abilities have been difficult to study because methodological as well as ethical issues stand in the way of recreating naturalistic interactions in the lab. The use of avatars in combination with virtual reality-based experiments now offers a unique chance to bring the ethological dimension of human-to-human interaction into focus. In this talk I will report on recent experiments in our lab that have used VR in behavioral and brain imaging studies. We will also address the issue of avatar realism and ask whether the overall realism of avatars is the driver of the affective experience or whether the selective presence of certain mid-level visual features is the critical factor.

Bio: Beatrice de Gelder (PI – Maastricht University) is full Professor of Cognitive Neuroscience in the Faculty of Psychology and Neuroscience at Maastricht University in Maastricht, The Netherlands, and a member of the Maastricht Brain Imaging Centre (M-BIC). She is also a PI at BISS, Brightlands Smart Services Campus. Prior to her current assignments, she was a Senior Scientist at the Martinos Center for Biomedical Imaging, Harvard University. She received an MA in Philosophy, an MA in Experimental Psychology, and a PhD in Philosophy from Louvain University in Belgium. Her current research focuses on face and body recognition. She has received various research grants, and in 2012 she was awarded a European Research Council (ERC) grant for the study of body language and emotional body expression. Her book "Emotions and the Body" was published by Oxford University Press (2016). She is or has been a member of several advisory panels of the European Commission for the FET and ICT programs of FP6, FP7, ESF, and ERC, and, outside the EU, of NIH and NSF. Extensive documentation of her work can be found on her website.



Pierre-Yves Oudeyer

INRIA - Bordeaux Sud-Ouest

Developmental Autonomous Learning: AI, Cognitive Sciences and Educational Technology

Abstract: Current approaches to AI and machine learning are still fundamentally limited in comparison with the autonomous learning capabilities of children. What is remarkable is not that some children become world champions in certain games or specialties: it is rather their autonomy, flexibility, and efficiency at learning many everyday skills under strongly limited resources of time, computation, and energy. And they do not need the intervention of an engineer for each new task (e.g., they do not need someone to provide a new task-specific reward function).

I will present a research program that has focused on computational modeling of child development and learning mechanisms over the last decade. I will discuss several developmental forces that guide exploration in large real-world spaces, starting from the perspective of how algorithmic models can help us better understand how they work in humans, and in return how this opens new approaches to autonomous machine learning.

In particular, I will discuss models of curiosity-driven autonomous learning, enabling machines to sample and explore their own goals and their own learning strategies, self-organizing a learning curriculum without any external reward or supervision.

I will show how this has helped scientists understand better aspects of human development such as the emergence of developmental transitions between object manipulation, tool use and speech. I will also show how the use of real robotic platforms for evaluating these models has led to highly efficient unsupervised learning methods, enabling robots to discover and learn multiple skills in high-dimensions in a handful of hours. I will discuss how these techniques are now being integrated with modern deep learning methods.

Finally, I will show how these models and techniques can be successfully applied in the domain of educational technologies, enabling the personalization of exercise sequences for human learners while maximizing both learning efficiency and intrinsic motivation. I will illustrate this with a large-scale experiment recently performed in primary schools, enabling children of all levels to improve their skills and motivation in learning aspects of mathematics.

Bio: Pierre-Yves Oudeyer is a research director at Inria and has headed the FLOWERS lab at Inria and Ensta-ParisTech since 2008. Before that, he was a permanent researcher at Sony Computer Science Laboratory for 8 years (1999-2007).

He studies developmental autonomous learning and the self-organization of behavioural and cognitive structures, at the frontiers of AI, machine learning, neuroscience, developmental psychology, and educational technologies. In particular, he studies exploration in large open-ended spaces, with a focus on autonomous goal setting, intrinsically motivated learning, and how this can automate curriculum learning. With his team, he pioneered curiosity-driven learning algorithms working in real-world robots (used in Sony Aibo robots), and showed how the same algorithms can be used to personalize sequences of learning activities in educational technologies deployed at scale in schools. He developed theoretical frameworks to better understand human curiosity and its role in cognitive development, and helped build an international interdisciplinary research community on human curiosity. He has also studied how machines and humans can invent, learn, and evolve speech communication systems.

He is a laureate of the Inria-National Academy of Science young researcher prize in computer science, of an ERC Starting Grant, and of the Lifetime Achievement Award of the Evolutionary Linguistics association. Beyond academic publications and several books, he is co-author of 11 international patents. His team created the first open-source 3D-printed humanoid robot for reproducible science and education (the Poppy project, now widely used in schools and artistic projects), as well as a startup company. He also works actively on the dissemination of science to the general public, through popular science articles and participation in radio and TV programs as well as science exhibitions.



Rachael Jack

University of Glasgow

Modelling Dynamic Facial Expressions Using Psychology-Centred Data-Driven Methods

Abstract: Artificial agents are now increasingly part of human society, destined for schools, hospitals, and homes to perform a variety of tasks. To engage their human users, artificial agents must be equipped with essential social skills such as facial expression communication. However, many artificial agents remain limited in this ability because they are typically equipped with a narrow set of prototypical, Western-centric facial expressions of emotion that lack naturalistic dynamics. Our aim is to address this challenge by equipping artificial agents with a broader repertoire of socially relevant and culturally sensitive facial expressions (e.g., complex emotions, conversational messages, social and personality traits). To this end, we use new, data-driven, psychology-based methodologies that can reverse-engineer dynamic facial expressions using human cultural perception. We show that our human-user-centered approach can reverse-engineer many different, highly recognizable, and human-like dynamic facial expressions that typically outperform the facial expressions of existing artificial agents. By objectively analyzing these dynamic facial expression models, we can also identify specific latent syntactical signalling structures that can inform the design of generative models for culture-specific and universal social face signalling. Together, our results demonstrate the utility of an interdisciplinary approach that applies data-driven, psychology-based methods to inform the social signalling generation capabilities of artificial agents. We anticipate that these methods will broaden the usability and global marketability of artificial agents and highlight the key role that psychology must continue to play in their design.

Bio: Rachael Jack is a Reader (Associate Professor) at the Institute of Neuroscience & Psychology, University of Glasgow. Her research has produced significant advances in understanding facial expressions of emotion within and across cultures using a novel interdisciplinary approach that combines psychophysics, social psychology, dynamic 3D computer graphics, and mathematical psychology. Most notably, she has revealed cultural specificities in facial expressions of emotion; that four, not six, expressive patterns are common across cultures; and that facial expressions transmit information in a hierarchical structure over time. Together, Jack's work has challenged the dominant view that six basic facial expressions of emotion are universal, which has led to a new theoretical framework of facial expression communication that she is now transferring to digital agents to synthesize culturally sensitive social avatars and robots. Jack's work has featured in several high-profile scientific outlets (e.g., Annual Review of Psychology, Current Biology, Psychological Science, PNAS, TICS). She is currently funded by the European Research Council (ERC) to lead the research program Computing the Face Syntax of Social Face Signals, which will deliver a formal model of human social face signalling with transference to social robotics. Jack is a recipient of the American Psychological Association (APA) New Investigator award, the Social and Affective Neuroscience Society (SANS) Innovation award, and the British Psychological Society (BPS) Spearman Medal. She is also an Associate Editor at the Journal of Experimental Psychology: General, and a committee member for the conferences of the Society for Affective Science (SAS), IEEE Automatic Face & Gesture Recognition, and the Vision Sciences Society (VSS).




Verena Rieser

Heriot-Watt University, Edinburgh

Let's chat! Can virtual agents learn how to have a conversation?

Abstract: Intelligent virtual agents frequently engage the user in conversation. The underlying technology - often referred to as spoken dialogue systems - has experienced a revolution over the past decade, moving from being completely handcrafted to using data-driven machine learning methods.

In this talk, I will review current developments, including my work on reinforcement learning and deep learning models, and evaluate these methods in the light of recent results from two large-scale studies. First, I will summarise results from the End-to-End NLG Challenge for presenting information in closed-domain, task-based dialogue systems. Second, I will report our experience from experimenting with these models for generating responses in open-domain social dialogue as part of the Amazon Alexa Prize challenge.

Bio: Verena Rieser is a Professor in Computer Science at Heriot-Watt University, Edinburgh, where she leads the NLP lab and is affiliated with the Interaction Lab and the Edinburgh Centre for Robotics. Verena first trained as a linguist and then did an MSc in Artificial Intelligence at the University of Edinburgh, where she also worked as a postdoctoral researcher (2008-11). Verena holds a PhD from Saarland University (2008), for which she received the Eduard Martin Prize for outstanding research. Her research focuses on machine learning techniques for spoken dialogue systems and language generation, on which she has authored almost 100 peer-reviewed papers. For the past two years, Verena and her group were the only UK team to make it through to the finals of the Amazon Alexa Prize.
