Continuous AU intensity estimation using localized, sparse facial feature space

methodological
nonverbal behavior
machine learning
proceeding
Author

Jeni, Girard, Cohn, & De la Torre

DOI

Citation (APA 7)

Jeni, L. A., Girard, J. M., Cohn, J. F., & De la Torre, F. (2013). Continuous AU intensity estimation using localized, sparse facial feature space. Proceedings of the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), 1–7.

Abstract

Most work in automatic facial expression analysis seeks to detect discrete facial actions. Yet, the meaning and function of facial actions often depend in part on their intensity. We propose a part-based, sparse representation for automated measurement of continuous variation in AU intensity. We evaluated its effectiveness in two publicly available databases: CK+ and the soon-to-be-released Binghamton high-resolution spontaneous 3D dyadic facial expression database. The former consists of posed facial expressions with ordinal-level intensity (absent, low, and high). The latter consists of spontaneous facial expression in response to diverse, well-validated emotion inductions, with 6 ordinal levels of AU intensity. As a preliminary test, we started from discrete emotion labels and ordinal-scale intensity annotations in the CK+ dataset. The algorithm achieved state-of-the-art performance. These preliminary results supported the utility of the part-based, sparse representation. Second, we applied the algorithm to the more demanding task of continuous AU intensity estimation in spontaneous facial behavior in the Binghamton database. Manual 6-point ordinal coding and continuous measurement were highly consistent. Visual analysis of the overlay of continuous measurement by the algorithm and manual ordinal coding strongly supported the representational power of the proposed method to smoothly interpolate across the full range of AU intensity.
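For readers unfamiliar with the general pipeline the abstract describes, the sketch below illustrates one plausible arrangement: local appearance features (e.g., patches around tracked facial landmarks) are encoded against a learned sparse dictionary, and a regressor maps the sparse codes to continuous AU intensity. This is a minimal illustration using scikit-learn with stand-in data; the feature dimensions, dictionary size, and choice of regressor are assumptions for demonstration, not the authors' implementation.

```python
# Illustrative sketch only -- not the authors' method from the paper.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Stand-in data: n frames x d-dimensional local appearance features
# (in practice, patches extracted around tracked facial landmarks).
X_train = rng.standard_normal((200, 64))
y_train = rng.uniform(0, 5, 200)           # AU intensity on a 0-5 scale
X_test = rng.standard_normal((20, 64))

# Learn a sparse, part-based dictionary over the local features,
# then encode each frame as a sparse combination of dictionary atoms.
dico = DictionaryLearning(n_components=32, alpha=1.0,
                          transform_algorithm="lasso_lars",
                          random_state=0)
codes_train = dico.fit_transform(X_train)
codes_test = dico.transform(X_test)

# Regress continuous AU intensity from the sparse codes.
reg = SVR(kernel="rbf", C=1.0)
reg.fit(codes_train, y_train)
intensity = reg.predict(codes_test)        # continuous intensity estimates
```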