To rate or not to rate: Investigating evaluation methods for generated co-speech gestures

methodological
observational
proceeding
Author

Wolfert, Girard, Kucherenko, & Belpaeme

DOI

Citation (APA 7)

Wolfert, P., Girard, J. M., Kucherenko, T., & Belpaeme, T. (2021). To Rate or Not To Rate: Investigating Evaluation Methods for Generated Co-Speech Gestures. In Proceedings of the 23rd International Conference on Multimodal Interaction (pp. 494–502). Association for Computing Machinery.

Abstract

While automatic performance metrics are crucial for machine learning of artificial human-like behaviour, the gold standard for evaluation remains human judgement. The subjective evaluation of artificial human-like behaviour in embodied conversational agents is, however, expensive, and little is known about the quality of the data it returns. Two approaches to subjective evaluation can be broadly distinguished: one relying on ratings, the other on pairwise comparisons. In this study, we use co-speech gestures to compare the two approaches against each other and answer questions about their appropriateness for evaluating artificial behaviour. We consider their ability to rate quality, but also aspects pertaining to the effort of use and the time required to collect subjective data. We use crowdsourcing to rate the quality of co-speech gestures in avatars, assessing which method picks up more detail in subjective assessments. We compared gestures generated by three different machine learning models with varying levels of behavioural quality. We found that both approaches were able to rank the videos according to quality and that the rankings correlated significantly, showing that in terms of quality there is no preference for one method over the other. We also found that pairwise comparisons were slightly faster and came with improved inter-rater reliability, suggesting that for small-scale studies pairwise comparisons are to be favoured over ratings.
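
The comparison described in the abstract can be illustrated with a minimal sketch (not the authors' code): given hypothetical mean rating scores and pairwise-comparison win counts for three models, Spearman's rank correlation indicates whether the two evaluation methods produce consistent quality orderings. All model names and values below are made up for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data: three gesture-generation models rated on a 1-5 scale
# by several participants, plus the number of times each model was
# preferred in head-to-head pairwise comparisons.
ratings = {
    "model_a": [4, 5, 4, 3, 5],
    "model_b": [2, 3, 3, 2, 3],
    "model_c": [1, 2, 1, 2, 1],
}
pairwise_wins = {
    "model_a": 18,
    "model_b": 9,
    "model_c": 3,
}

models = list(ratings)
mean_ratings = [np.mean(ratings[m]) for m in models]
win_counts = [pairwise_wins[m] for m in models]

# Spearman's rho between the two orderings: a high, significant correlation
# suggests both methods rank the models consistently.
rho, p_value = spearmanr(mean_ratings, win_counts)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

With only three models the p-value is not very meaningful; in practice the same computation would be run over all evaluated stimuli, alongside an inter-rater reliability measure for each method.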