Toward Multimodal Modeling of Emotional Expressiveness

Tags

substantive, observational, machine learning, verbal behavior, nonverbal behavior, emotion, proceeding
Authors

Lin, Girard, Sayette, & Morency

DOI

Citation (APA 7)

Lin, V., Girard, J. M., Sayette, M. A., & Morency, L.-P. (2020). Toward Multimodal Modeling of Emotional Expressiveness. Proceedings of the 22nd International Conference on Multimodal Interaction, 548–557.

Abstract

Emotional expressiveness captures the extent to which a person tends to outwardly display their emotions through behavior. Due to the close relationship between emotional expressiveness and behavioral health, as well as the crucial role that it plays in social interaction, the ability to automatically predict emotional expressiveness stands to spur advances in science, medicine, and industry. In this paper, we explore three related research questions. First, how well can emotional expressiveness be predicted from visual, linguistic, and multimodal behavioral signals? Second, how important is each behavioral modality to the prediction of emotional expressiveness? Third, which behavioral signals are reliably related to emotional expressiveness? To answer these questions, we add highly reliable transcripts and human ratings of perceived emotional expressiveness to an existing video database and use this data to train, validate, and test predictive models. Our best model shows promising predictive performance on this dataset (RMSE = 0.65, R² = 0.45, r = 0.74). Multimodal models tend to perform best overall, and models trained on the linguistic modality tend to outperform models trained on the visual modality. Finally, examination of our interpretable models' coefficients reveals a number of visual and linguistic behavioral signals (such as facial action unit intensity, overall word count, and use of words related to social processes) that reliably predict emotional expressiveness.
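
To make the reported evaluation concrete, the sketch below (not the authors' pipeline; the data, features, and model choice are hypothetical) shows how the metrics cited in the abstract (RMSE, R², Pearson's r) and the coefficients of an interpretable linear model could be computed for a regression task of this kind, using scikit-learn and SciPy. A regularized linear model is used here only because its coefficients are directly inspectable, mirroring the paper's emphasis on identifying which behavioral signals reliably predict expressiveness.

```python
# Illustrative sketch only: synthetic data standing in for multimodal features
# (e.g., facial action unit intensities, word counts, LIWC categories) and
# perceived expressiveness ratings.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical feature matrix: rows are video segments, columns are features.
X = rng.normal(size=(500, 20))
true_w = rng.normal(size=20)
y = X @ true_w + rng.normal(scale=0.5, size=500)  # hypothetical ratings

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = Ridge(alpha=1.0).fit(X_train, y_train)  # interpretable linear model
y_pred = model.predict(X_test)

# Evaluation metrics of the same kind reported in the abstract.
rmse = mean_squared_error(y_test, y_pred) ** 0.5
r2 = r2_score(y_test, y_pred)
r, _ = pearsonr(y_test, y_pred)
print(f"RMSE = {rmse:.2f}, R2 = {r2:.2f}, r = {r:.2f}")

# Coefficient sign and magnitude indicate which features most strongly
# relate to the predicted ratings.
top = np.argsort(-np.abs(model.coef_))[:5]
print("Most influential feature indices:", top)
```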

Awards

This paper was nominated for Best Paper at ICMI 2020.