This page lists all papers published by Dr. Girard, organized by year of publication. Click a paper’s title to visit the publisher page and access the official version. Use Abstract to reveal the summary, Citation for the APA 7th-edition reference, and BibTeX to view the BibTeX entry. Clicking the link a second time will close the display again. Preprint links to a free author version (not the final typeset copy), and Materials leads to an open repository with data, code, and other open-science resources.

2025

Caumiant, Kang, Girard & Fairbairn
Psychology of Addictive Behaviors
Objective: Emotion measurement is central to capturing acute alcohol reinforcement and so to informing models of alcohol use disorder etiology. Yet our understanding of how alcohol impacts emotion as assessed across diverse response modalities remains incomplete. The present study leverages a social alcohol-administration paradigm to assess drinking-related emotions, aiming to elucidate impacts of intoxication on self-reported versus behaviorally expressed emotion. Method: Participants (N = 60; mean age = 22.5; 50% male; 55% White) attended two counterbalanced laboratory sessions, on one of which they were administered an alcoholic beverage (target blood alcohol content .08%) and on the other a nonalcoholic control beverage. Participants in both conditions were accurately informed of beverage contents and consumed study beverages in assigned groups of three while their behavior was videotaped. Emotion was assessed via self-report as well as continuous coding of facial muscle movements. Results: The relationship between self-reported and behaviorally expressed emotion diverged significantly across beverage conditions (positive affect: b = -0.174, t = -2.36, p = .022; negative affect: b = 0.4319, t = 2.37, p = .021). Specifically, self-reports and behavioral displays converged among sober but not intoxicated participants. Further, alcohol's effects on positive facial displays remained significant in models controlling for self-reported positive and negative emotion, with alcohol enhancing Duchenne smiles 20% beyond effects captured via self-reports, pointing to unique effects of alcohol on behavioral indicators of positive emotion. Conclusions: Findings highlight effects of acute intoxication on the convergence and divergence of emotion measures, thus informing our understanding of measures for capturing emotions that are most proximal to drinking and thus most immediately reinforcing of alcohol consumption.
@article{caumiant2025, title = {Alcohol and Emotion: Analyzing Convergence between Facially Expressed and Self-Reported Indices of Emotion under Alcohol Intoxication}, author = {Eddie P. Caumiant and Dahyeon Kang and Jeffrey M. Girard and Catharine E. Fairbairn}, year = {2025}, journal = {Psychology of Addictive Behaviors}, doi = {10.1037/adb0001053} }
Girard, Yermol, Salah & Cohn
Annual Review of Clinical Psychology
Clinical psychological assessment often relies on self-report, interviews, and behavioral observation, methods that pose challenges for reliability, validity, and scalability. Computational approaches offer new opportunities to analyze expressive behavior (e.g., facial expressions, vocal prosody, and language use) with greater precision and efficiency. This paper provides an accessible conceptual framework for understanding how methods from computer vision, speech signal processing, and natural language processing can enhance clinical assessment. We outline the goals, frameworks, and methods of both clinical and computational approaches, and present an illustrative review of interdisciplinary research applying these techniques across a range of mental health conditions. We also examine key challenges related to data quality, measurement, interdisciplinarity, and ethics. Finally, we highlight future directions for building systems that are robust, interpretable, and clinically meaningful. This review is intended to support dialogue between clinical and computational communities and to guide ongoing research and development at their intersection.
@article{girard2025, title = {Computational Analysis of Expressive Behavior in Clinical Assessment}, author = {Jeffrey M Girard and Dasha A Yermol and Albert Ali Salah and Jeffrey F Cohn}, year = {2025}, journal = {Annual Review of Clinical Psychology}, doi = {10.1146/annurev-clinpsy-081423-024140} }
Rincon Caicedo, Girard, Punt, Giovanetti & Ilardi
Journal of Latinx Psychology
As with many other racial and ethnic minorities, Hispanic Americans face substantial disparities in health care access and disease prevalence. The published literature on mental health disorders among Hispanic individuals, however, is not robust, and their experience of depressive disorders remains poorly understood. The construct of acculturation may help elucidate the risk of depression among Hispanic Americans and may inform the development of appropriate policy and treatment resources. We examined the degree to which acculturation may interact with key demographic variables (sex, age, socioeconomic status [SES], and Mexican ancestry) in accounting for depressive symptomatology among Hispanic Americans. We conducted a series of Bayesian generalized linear mixed models using data from the National Health and Nutrition Examination Survey to investigate the self-reported depressive symptomatology (measured by the Patient Health Questionnaire-9) of Mexican Americans and other Hispanic individuals and to examine possible effects of acculturation and demographics on depressive symptomatology in this population. Mexican Americans had substantially lower levels of depression than other Hispanic individuals. Acculturation was positively associated with depression severity, but this effect was moderated by sex and SES. High acculturation was more strongly linked to depression among men and those of high SES. Acculturation and several demographic factors were associated with depressive symptomatology among Hispanic individuals. Acculturation can be useful in understanding risk, developing culturally informed interventions, and implementing community-level changes to address the burden of depression among Hispanic Americans.
@article{rinconcaicedo2025, title = {Depressive Symptoms among Hispanic Americans: Investigating the Interplay of Acculturation and Demographics}, author = {Mariana {Rincon Caicedo} and Jeffrey M. Girard and Stephanie E. Punt and Annaleis K. Giovanetti and Stephen S. Ilardi}, year = {2025}, journal = {Journal of Latinx Psychology}, volume = {13}, number = {1}, pages = {68--84}, doi = {10.1037/lat0000266} }
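For readers who want a concrete starting point, here is a minimal sketch (not the authors' code) of how a Bayesian mixed model with acculturation-by-demographic interactions might be specified in Python with the bambi library; the file name, column names, and Gaussian family are illustrative assumptions only.

import arviz as az
import bambi as bmb
import pandas as pd

# Hypothetical analysis file; column names are stand-ins, not NHANES variable names.
df = pd.read_csv("nhanes_subset.csv")  # columns: phq9, acculturation, sex, ses, cluster

# Acculturation main effect plus interactions with sex and SES,
# with a random intercept for sampling cluster.
model = bmb.Model(
    "phq9 ~ acculturation * sex + acculturation * ses + (1|cluster)",
    data=df,
    family="gaussian",  # assumption; a count family may suit PHQ-9 sum scores better
)
idata = model.fit(draws=2000, chains=4)  # posterior sampled via MCMC
print(az.summary(idata))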
Girard, Yermol, Bylsma, Cohn, Fournier, Morency & Swartz
Journal of Consulting and Clinical Psychology
@article{girard2025a, title = {Dynamic and Dyadic Relationships between Facial Behavior, Working Alliance, and Treatment Outcomes during Depression Therapy}, author = {Jeffrey M Girard and Dasha A Yermol and Lauren M Bylsma and Jeffrey F Cohn and Jay C Fournier and Louis-Philippe Morency and Holly A Swartz}, year = {2025}, journal = {Journal of Consulting and Clinical Psychology} }
Campbell, Girard, McDuff & Rosengren
Journal of Interactive Marketing
Most research on influencers takes the perspective of marketers, examining how influencers can be leveraged to build brands. However, as influencers grow in popularity, they are becoming a marketing force in their own right, warranting deeper exploration from their perspective. This paper shifts the focus to influencers as active marketers, using a multilevel mixed-effects growth model to identify factors associated with follower growth. Analyzing a dataset of 14,311,145 pieces of Instagram content posted over more than two years from 6,079 influencers across 57 countries, we examine how attention labor (content strategy and persona appeal) and relationship labor (captioning strategy and ecosystem connectivity) relate to follower growth. Whereas existing research primarily focuses on engagement with specific pieces of content, this study takes a broader approach, investigating how influencers' content and engagement strategies are associated with differences in follower growth over time. By reframing influencers as marketers rather than media vehicles, this study contributes to marketing theory and provides insights for influencers, influencer agencies, and marketers who seek to better understand influencer growth.
@article{campbell2025, title = {{{EXPRESS}}: {{How}} Influencers Grow: An Empirical Study and Future Research Agenda}, author = {Colin Campbell and Jeffrey M. Girard and Daniel McDuff and Sara Rosengren}, year = {2025}, journal = {Journal of Interactive Marketing}, pages = {10949968251360683}, doi = {10.1177/10949968251360683}, note = {Last visited on 09/03/2025} }
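As a rough illustration of the modeling approach described above, here is a minimal mixed-effects growth model sketch in Python with statsmodels; the data layout and variable names (follower_log, month, posts_per_week, influencer_id) are hypothetical, and the actual analysis involves far more predictors.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format panel: one row per influencer per month.
df = pd.read_csv("influencer_panel.csv")

# Growth model: follower trajectories over time, with a time-by-strategy
# interaction and a random intercept and slope for each influencer.
model = smf.mixedlm(
    "follower_log ~ month * posts_per_week",
    data=df,
    groups=df["influencer_id"],
    re_formula="~month",  # random intercept and random time slope
)
result = model.fit()
print(result.summary())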
Aafjes-van Doorn & Girard
Psychotherapy Research
This special section underscores the potential of multimodal measurement approaches to transform psychotherapy research. A multimodal approach provides a more comprehensive understanding than any single modality (type of collected information) can provide on its own. Traditionally, clinicians and researchers have relied on their intuition, experience, and training to integrate different types of information in a psychotherapy session or treatment. Increasingly, however, computational methods offer a complementary alternative, enabling more automated, data-driven, and reproducible solutions. The six empirical examples in this special section illustrate the emerging (and often interdisciplinary) methodologies, including text, audio, video, and physiological measures, that are relevant in the psychotherapy setting. While each study addressed distinct research questions and employed unique methodologies, they all demonstrated a commitment to leveraging multimodal measurement and tackling the challenges of integrating diverse data sources.
@article{aafjes-vandoorn2025, title = {From Intuition to Innovation: Empirical Illustrations of Multimodal Measurement in Psychotherapy Research}, author = {Katie {Aafjes-van Doorn} and Jeffrey M. Girard}, year = {2025}, journal = {Psychotherapy Research}, volume = {35}, number = {2}, pages = {171--173}, doi = {10/g824tc} }
Edershile, Girard, Woods, Williams, Simms & Wright
Assessment
The construct of narcissism can be conceptualized very differently depending on the psychological literature. The social-personality conceptualization of narcissism often emphasizes high self-esteem as well as a range of associated maladaptive and adaptive outcomes. The clinical literature focuses on the pathological aspects of narcissism and highlights maladaptive aspects that correspond to the relationship between narcissistic grandiosity and narcissistic vulnerability. Reflecting these varying views of narcissism, many measures have become popular in the assessment of the construct, each with varying interpersonal characterizations. The current study (N = 1,111) evaluated the interpersonal profiles captured by popular measures of narcissism and examined whether measures capture overlapping, differentiated, and/or intended interpersonal styles. Results revealed that measures of narcissism capture a wide range of interpersonal styles, from warm/dominant to submissive. However, most measures emphasize the role of interpersonal dominance in the measure content. Viewing narcissism from a three-factor structure, including narcissistic agency, antagonism, and vulnerability, helps to integrate the wide range of interpersonal styles apparent across narcissism measures. Furthermore, the level of (mal)adaptivity and general interpersonal style somewhat maps onto the literature of origin for the scales. Implications for measurement selection in the assessment of narcissism are discussed.
@article{edershile2025, title = {Narcissism from Every Angle: An Interpersonal Analysis of Narcissism in Young Adults}, author = {Elizabeth A. Edershile and Jeffrey M. Girard and William C. Woods and Trevor F. Williams and Leonard J. Simms and Aidan G. C. Wright}, year = {2025}, journal = {Assessment}, pages = {10731911251356150}, doi = {10.1177/10731911251356150}, note = {Last visited on 09/03/2025} }
Agrawal, Akinyemi, Alvero, Behrooz, Buffalini, Carlucci, Chen, Chen, Chen, Cheng, Chowdary, Chuang, D'Avirro, Daly, Dong, Duppenthaler, Gao, Girard, Gleize, Gomez, Gong, Govindarajan, Han, He, Hernandez, Hristov, Huang, Inaguma, Jain, Janardhan, Jia, Klaiber, Kovachev, Kumar, Li, Li, Litvin, Liu, Ma, Ma, Ma, Ma, Mantovani, Miglani, Mohan, Morency, Ng, Ng, Nguyen, Oberai, Peloquin, Pino, Popovic, Poursaeed, Prada, Rakotoarison, Ranjan, Richard, Ropers, Saleem, Sharma, Shcherbyna, Shen, Shen, Stathopoulos, Sun, Tomasello, Tran, Turkatenko, Wan, Wang, Wang, Williamson, Wood, Xiang, Yang, Yao, Zhang, Zhang, Zhang, Zheng, Zhyzheria, Zikes & Zollhoefer
arXiv:2506.22554 [cs.CV]
Human communication involves a complex interplay of verbal and nonverbal signals, essential for conveying meaning and achieving interpersonal goals. To develop socially intelligent AI technologies, it is crucial to develop models that can both comprehend and generate dyadic behavioral dynamics. To this end, we introduce the Seamless Interaction Dataset, a large-scale collection of over 4,000 hours of face-to-face interaction footage from over 4,000 participants in diverse contexts. This dataset enables the development of AI technologies that understand dyadic embodied dynamics, unlocking breakthroughs in virtual agents, telepresence experiences, and multimodal content analysis tools. We also develop a suite of models that utilize the dataset to generate dyadic motion gestures and facial expressions aligned with human speech. These models can take as input both the speech and visual behavior of their interlocutors. We present a variant with speech from an LLM and integrations with 2D and 3D rendering methods, bringing us closer to interactive virtual agents. Additionally, we describe controllable variants of our motion models that can adapt emotional responses and expressivity levels, as well as generate more semantically relevant gestures. Finally, we discuss methods for assessing the quality of these dyadic motion models, which demonstrate the potential for more intuitive and responsive human-AI interactions.
@misc{agrawal2025, title = {Seamless Interaction: {{Dyadic}} Audiovisual Motion Modeling and Large-Scale Dataset}, author = {Vasu Agrawal and Akinniyi Akinyemi and Kathryn Alvero and Morteza Behrooz and Julia Buffalini and Fabio Maria Carlucci and Joy Chen and Junming Chen and Zhang Chen and Shiyang Cheng and Praveen Chowdary and Joe Chuang and Antony D'Avirro and Jon Daly and Ning Dong and Mark Duppenthaler and Cynthia Gao and Jeff Girard and Martin Gleize and Sahir Gomez and Hongyu Gong and Srivathsan Govindarajan and Brandon Han and Sen He and Denise Hernandez and Yordan Hristov and Rongjie Huang and Hirofumi Inaguma and Somya Jain and Raj Janardhan and Qingyao Jia and Christopher Klaiber and Dejan Kovachev and Moneish Kumar and Hang Li and Yilei Li and Pavel Litvin and Wei Liu and Guangyao Ma and Jing Ma and Martin Ma and Xutai Ma and Lucas Mantovani and Sagar Miglani and Sreyas Mohan and Louis-Philippe Morency and Evonne Ng and Kam-Woh Ng and Tu Anh Nguyen and Amia Oberai and Benjamin Peloquin and Juan Pino and Jovan Popovic and Omid Poursaeed and Fabian Prada and Alice Rakotoarison and Rakesh Ranjan and Alexander Richard and Christophe Ropers and Safiyyah Saleem and Vasu Sharma and Alex Shcherbyna and Jia Shen and Jie Shen and Anastasis Stathopoulos and Anna Sun and Paden Tomasello and Tuan Tran and Arina Turkatenko and Bo Wan and Chao Wang and Jeff Wang and Mary Williamson and Carleigh Wood and Tao Xiang and Yilin Yang and Julien Yao and Chen Zhang and Jiemin Zhang and Xinyue Zhang and Jason Zheng and Pavlo Zhyzheria and Jan Zikes and Michael Zollhoefer}, year = {2025}, publisher = {arXiv:2506.22554 [cs.CV]} }
Creswell, Wright, Sayette, Girard, Lyons & Smyth
Clinical Psychological Science
Young adults typically drink socially, yet most lab studies testing alcohol responses have administered alcohol in isolation. This is the first study to examine alcohol responses and social reward in a group setting among a young-adult at-risk sample. Heavy-drinking young adults (N = 393; 50% female) were grouped in threes and drank a moderate dose of alcohol or a placebo beverage. These social interactions were recorded, and the duration and sequence of facial expressions, speech, and laughter were coded. Results revealed a comprehensive, multimodal, positive effect of alcohol on socioemotional experiences across self-report (e.g., increased positive affect and social bonding, greater relief of unpleasant feelings) and behavioral outcomes at both the individual (e.g., more rapid increases in Duchenne smiling) and group levels (e.g., more three-way conversations). Findings underscore the potential for group-formation paradigms to yield valuable data regarding etiological mechanisms underlying alcohol use disorder. All data and code are available (https://osf.io/3q42z/).
@article{creswell2025, title = {The Effects of Alcohol in Groups of Heavy-Drinking Young Adults: A Multimodal Investigation of Alcohol Responses in a Laboratory Social Setting}, author = {Kasey G. Creswell and Aidan G. C. Wright and Michael A. Sayette and Jeffrey M. Girard and Greta Lyons and Joshua M. Smyth}, year = {2025}, journal = {Clinical Psychological Science}, pages = {21677026251333784}, doi = {10.1177/21677026251333784}, note = {Last visited on 09/03/2025} }
Jun, Girard, Martin & Fazzino
Eating Behaviors
Objective: Hyper-palatable foods (HPF) contain nutrient combinations that are hypothesized to maximize their rewarding effects during consumption. Due to their strong reinforcing properties, HPF are hypothesized to lead to greater energy intake within a meal. However, this premise has not been tested in free-living conditions. The current study examined the association between within-meal HPF intake and (1) measured energy intake and (2) self-reported overeating, assessed within eating occasions using smartphone-based food photography methodology. Methods: A total of 29 participants reported food intake and eating experiences (N = 345 total eating occasions) in real time for 4 days using smartphone-based food photography methodology. HPF were identified using a standardized definition. Bayesian multilevel modeling was conducted to investigate the within-person effects of proportional calorie intake from HPF (%kcal from HPF) on total energy intake and subjective overeating. Pre-meal hunger and proportional energy intake from high energy dense (HED) foods were included as covariates. Results: When participants consumed more %kcal from HPF than their average, they consumed greater total energy during eating occasions, even when controlling for pre-meal hunger and %kcal from HED foods (median β = 0.09, 95% HDI [0.02, 0.16], pd = 99.56%). Additionally, consuming more %kcal from HPF than average was associated with greater eating despite feeling full, when controlling for covariates (median β = 0.15, 95% HDI [-0.02, 0.34], pd = 96.45%). Conclusions: The findings supported the premise that HPF themselves may yield greater energy intake and eating despite satiation, measured in real time under free-living conditions.
@article{jun2025, title = {The Role of Hyper-Palatable Foods in Energy Intake Measured Using Mobile Food Photography Methodology}, author = {Daiil Jun and Jeffrey M. Girard and Corby K. Martin and Tera L. Fazzino}, year = {2025}, journal = {Eating Behaviors}, volume = {57}, pages = {101983}, doi = {10.1016/j.eatbeh.2025.101983}, note = {Last visited on 09/03/2025} }
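The key within-person contrast in this study ("more %kcal from HPF than one's own average") is typically implemented via person-mean centering; below is a minimal sketch of that step under hypothetical column names, not the authors' actual pipeline.

import pandas as pd

# Hypothetical meal-level data: one row per eating occasion.
df = pd.read_csv("meals.csv")  # columns: participant, hpf_pct_kcal, total_kcal

# Between-person component: each participant's own average HPF intake.
df["hpf_between"] = df.groupby("participant")["hpf_pct_kcal"].transform("mean")
# Within-person component: each occasion's deviation from that average.
df["hpf_within"] = df["hpf_pct_kcal"] - df["hpf_between"]

# Entering hpf_within and hpf_between as separate predictors in a multilevel
# model lets the hpf_within coefficient capture occasions where a person
# ate more HPF than their own norm, as in the analysis described above.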
Chung, Girard, Ravichandran, Öngür, Cohen & Baker
Journal of Psychopathology and Clinical Science
Prevailing factor models of psychosis are centered on schizophrenia-related disorders defined by the Diagnostic and Statistical Manual of Mental Disorders and International Classification of Diseases, restricting generalizability to other clinical presentations featuring psychosis, even though affective psychoses are more common. This study aims to bridge this gap by conducting exploratory and confirmatory factor analyses, utilizing clinical ratings collected from patients with either affective or nonaffective psychoses (n = 1,042). Drawing from established clinical instruments, such as the Positive and Negative Syndrome Scale, Young Mania Rating Scale, and Montgomery-Åsberg Depression Rating Scale, a broad spectrum of core psychotic symptoms was considered for the model development. Among the candidate models considered, including correlated factors and multifactor models, a model with seven correlated factors encompassing positive symptoms, negative symptoms, depression, mania, disorganization, hostility, and anxiety was most interpretable with acceptable fit. The seven factors exhibited expected associations with external validators, were replicable through cross-validation, and were generalizable across affective and nonaffective psychoses. In summary, this study formulates a transdiagnostic dimensional model by integrating well-established clinical ratings that encompass a wide range of core symptoms observed in both affective and nonaffective psychoses. We demonstrate that a multidimensional symptom model representing both psychotic (positive symptoms, negative symptoms, and disorganization) and mood symptoms (depression, mania, hostility, and anxiety) is most interpretable and applicable across psychotic diagnoses. Considering the complex interrelationships among identified symptom dimensions, a dimensional approach may be more suitable for characterizing the symptom profiles of psychotic patients than a categorical diagnostic approach.
@article{chung2025, title = {Transdiagnostic Modeling of Clinician-Rated Symptoms in Affective and Nonaffective Psychotic Disorders}, author = {Yoonho Chung and Jeffrey M. Girard and Caitlin Ravichandran and Dost {\"O}ng{\"u}r and Bruce M. Cohen and Justin T. Baker}, year = {2025}, journal = {Journal of Psychopathology and Clinical Science}, volume = {134}, number = {1}, pages = {81--96}, doi = {10/g8pwmn}, note = {Last visited on 10/31/2024} }
Adaryukov, Biernat, Girard, Villicana & Pleskac
Decision
In graduate admissions, as in many multiattribute decisions, evaluators must judge candidates from a flood of information, including recommendation letters, personal statements, grades, and standardized test scores. Some of this information is structured, while some is unstructured. Yet most studies of multiattribute decisions focus on decisions made from structured information. This study evaluated how structured and unstructured information is used within graduate admissions decisions. We examined a uniquely comprehensive data set of N = 2,231 graduate applications to the University of Kansas, containing full application packages, demographics, and final admissions decisions for each applicant. To make sense of our documents, we applied structural topic modeling (STM), a topic model that allows topic content and prevalence to covary based on other metadata (e.g., department of study). STM allowed us to examine what information the letters and statements contain and the relationships between variables like gender and race and the textual information. We found that most topics in the unstructured data related to specific fields of study. The STMs did not uncover strong differences among applicants regarding race and gender, though the recommendation letters and personal statements for international applicants did show some different topic profiles than domestic applicants. We also found that admissions decision makers behaved as if they prioritized structured numeric metrics, using unstructured information to check for disqualifications, if at all. However, we found that topics were less reliable than admissions documents, meaning that additional ways of using them cannot be completely ruled out. The implications of our findings for graduate admissions decisions are discussed.
@article{adaryukov2025, title = {Worth the Weight: An Examination of Unstructured and Structured Data in Graduate Admissions}, author = {James Adaryukov and Monica Biernat and Jeffrey M. Girard and Adrian J. Villicana and Timothy J. Pleskac}, year = {2025}, journal = {Decision}, volume = {12}, number = {1}, pages = {4--30}, doi = {10.1037/dec0000251} }
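Structural topic modeling itself is implemented in the R package stm; as a rough Python stand-in (plain LDA, which lacks STM's metadata covariates), the following sketch shows only the basic topics-from-documents step on placeholder text, not the study's actual procedure.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder documents; the study used recommendation letters and
# personal statements from real application packages.
documents = [
    "strong research background in cognitive psychology and statistics",
    "dedicated teaching experience and community service leadership",
    "excellent quantitative skills and strong research potential",
]

counts = CountVectorizer(stop_words="english").fit_transform(documents)
lda = LatentDirichletAllocation(n_components=5, random_state=0)  # topic count chosen via diagnostics in practice
doc_topics = lda.fit_transform(counts)  # per-document topic proportions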

2024

Butler, Christian, Girard, Vanzhula & Levinson
Behaviour Research and Therapy
Objective: Imaginal exposure is a novel intervention for eating disorders (EDs) that has been investigated as a method for targeting ED symptoms and fears. Research is needed to understand mechanisms of change during imaginal exposure for EDs, including whether within- and between-session distress reduction is related to treatment outcomes. Method: Study 1 tested four sessions of online imaginal exposure (N = 143). Study 2 examined combined imaginal and in vivo exposure, comprising six imaginal exposure sessions (N = 26). ED symptoms and fears were assessed pre- and posttreatment, and subjective distress and state anxiety were collected during sessions. Results: Subjective distress tended to increase within-session in both studies, and within-session reduction was not associated with change in ED symptoms or fears. In Study 1, between-session reduction of distress and state anxiety was associated with greater decreases in ED symptoms and fears pre- to posttreatment. In Study 2, between-session distress reduction occurred but was not related to outcomes. Conclusions: Within-session distress reduction may not promote change during exposure for EDs, whereas between-session distress reduction may be associated with better treatment outcomes. These findings corroborate research on distress reduction during exposure for anxiety disorders. Clinicians might consider approaches to exposure-based treatment that focus on distress tolerance and promote between-session distress reduction.
@article{butler2024, title = {Are Within- and between-Session Changes in Distress Associated with Treatment Outcomes? {{Findings}} from Two Clinical Trials of Exposure for Eating Disorders}, author = {Rachel M. Butler and Caroline Christian and Jeffrey M. Girard and Irina A. Vanzhula and Cheri A. Levinson}, year = {2024}, journal = {Behaviour Research and Therapy}, volume = {180}, pages = {104577}, doi = {10.1016/j.brat.2024.104577}, note = {Last visited on 09/03/2025} }
Sprunger, Girard & Chard
Journal of Traumatic Stress
Dimensional conceptualizations of psychopathology hold promise for understanding the high rates of comorbidity with posttraumatic stress disorder (PTSD). Linking PTSD symptoms to transdiagnostic dimensions of psychopathology may enable researchers and clinicians to understand the patterns and breadth of behavioral sequelae following traumatic experiences that may be shared with other psychiatric disorders. To explore this premise, we recruited a trauma-exposed online community sample (N = 462) and measured dimensional transdiagnostic traits of psychopathology using parceled facets derived from the Personality Inventory for DSM-5 Faceted–Short Form. PTSD symptom factors were measured using the PTSD Checklist for DSM-5 and derived using confirmatory factor analysis according to the seven-factor hybrid model (i.e., Intrusions, Avoidance, Negative Affect, Anhedonia, Externalizing Behaviors, Anxious Arousal, and Dysphoric Arousal). We observed hypothesized associations between PTSD factors and transdiagnostic traits, indicating that some transdiagnostic dimensions were associated with nearly all PTSD symptom factors (e.g., emotional lability: mean r = .35), whereas others showed more unique relationships (e.g., hostility with Externalizing Behaviors: r = .60; hostility with other PTSD factors: rs = .12–.31). All PTSD factors were correlated with traits beyond those that would appear to be construct-relevant, suggesting the possibility of indirect associations that should be explicated in future research. The results indicate the breadth of trait-like consequences associated with PTSD symptom exacerbation, with implications for case conceptualization and treatment planning. Although PTSD is not a personality disorder, the findings indicate that increased PTSD factor severity is moderately associated with different patterns of trait-like disruptions in many areas of functioning.
@article{sprunger2024, title = {Associations between Transdiagnostic Traits of Psychopathology and Hybrid Posttraumatic Stress Disorder Factors in a Trauma-Exposed Community Sample}, author = {Joel G. Sprunger and Jeffrey M. Girard and Kathleen M. Chard}, year = {2024}, journal = {Journal of Traumatic Stress}, volume = {37}, number = {3}, pages = {384--396}, doi = {10.1002/jts.23023}, note = {Last visited on 03/02/2024} }
Kebe, Birlikci, Boudin, Ishii, Girard & Morency
Proceedings of the 24th ACM International Conference on Intelligent Virtual Agents
@inproceedings{kebe2024, title = {{{GeSTICS}}: A Multimodal Corpus for Studying Gesture Synthesis in Two-Party Interactions with Contextualized Speech}, booktitle = {Proceedings of the 24th {{ACM International Conference}} on {{Intelligent Virtual Agents}}}, author = {Gaoussou Youssouf Kebe and Mehmet Deniz Birlikci and Auriane Boudin and Ryo Ishii and Jeffrey M. Girard and Louis-Philippe Morency}, year = {2024}, month = {sep}, pages = {1--10}, publisher = {ACM}, address = {Glasgow, United Kingdom}, doi = {10.1145/3652988.3673917}, note = {Last visited on 07/02/2025} }
Baber, Hamilton, Girard, Cohen, Gratton, Ellis & Hemmer
Sleep
@article{baber2024, title = {It's the Sentiment That Counts: Comparing Sentiment Analysis Tools for Estimating Affective Valence in Dream Reports}, author = {Garrett R Baber and Nancy A Hamilton and Jeffrey M Girard and Jamie M Cohen and Matthew K P Gratton and Samantha Ellis and Eliza Hemmer}, year = {2024}, journal = {Sleep}, volume = {47}, number = {12}, pages = {zsae210}, doi = {10.1093/sleep/zsae210}, note = {Last visited on 09/20/2024} }
L'Insalata, Girard & Fazzino
International Journal of Environmental Research and Public Health
Research supports the premise that greater substance use is associated with fewer sources of environmental reinforcement. However, it remains unclear whether types of environmental reinforcement (e.g., social or work) may differentially influence use. This study tested the association between types of environmental reinforcement and engagement in multiple health risk behaviors (alcohol use, binge eating, and nicotine use). Cross-sectional data were collected from a general population sample of US adults (N = 596). The Pleasant Events Schedule (PES) was used to measure sources of reinforcement. Exploratory structural equation modeling (ESEM) characterized different areas of environmental reinforcement and correlations with alcohol consumption, binge eating, and nicotine use. A four-factor structure of the PES demonstrated a conceptually cohesive model with acceptable fit and partial strict invariance. Social-related reinforcement was positively associated with alcohol consumption (β = 0.30, p < 0.001) and binge eating (β = 0.26, p < 0.001). Work/school-related reinforcement was negatively associated with binge eating (β = -0.14, p = 0.006). No areas of reinforcement were significantly associated with nicotine use (p values ranged from 0.069 to 0.755). Social-related activities may be associated with engagement in multiple health risk behaviors (more binge eating and alcohol use), whereas work/school-related activities may be preventative against binge eating. Understanding these relationships can inform prevention efforts targeting health risk behaviors.
@article{linsalata2024, title = {Sources of Environmental Reinforcement and Engagement in Health Risk Behaviors among a General Population Sample of {{US}} Adults}, author = {Alexa M. L'Insalata and Jeffrey M. Girard and Tera L. Fazzino}, year = {2024}, month = {nov}, journal = {International Journal of Environmental Research and Public Health}, volume = {21}, number = {11}, pages = {1390}, publisher = {Multidisciplinary Digital Publishing Institute}, doi = {10.3390/ijerph21111390}, note = {Last visited on 09/03/2025} }

2023

Girard, Tie & Liebenthal
Proceedings of the 11th International Conference on Affective Computing and Intelligent Interaction (ACII)
In this paper, we describe the design, collection, and validation of a new video database that includes holistic and dynamic emotion ratings from 83 participants watching 22 affective movie clips. In contrast to previous work in Affective Computing, which pursued a single "ground truth" label for the affective content of each moment of each video (e.g., by averaging the ratings of 2 to 7 trained participants), we embrace the subjectivity inherent to emotional experiences and provide the full distribution of all participants' ratings (with an average of 76.7 raters per video). We argue that this choice represents a paradigm shift with the potential to unlock new research directions, generate new hypotheses, and inspire novel methods in the Affective Computing community. We also describe several interdisciplinary use cases for the database: to provide dynamic norms for emotion elicitation studies (e.g., in psychology, medicine, and neuroscience), to train and test affective content analysis algorithms (e.g., for dynamic emotion recognition, video summarization, and movie recommendation), and to study subjectivity in emotional reactions (e.g., to identify moments of emotional ambiguity or ambivalence within movies, identify predictors of subjectivity, and develop personalized affective content analysis algorithms). The database is made freely available to researchers for noncommercial use at https://dynamos.mgb.org.
@inproceedings{girard2023, title = {{{DynAMoS}}: The Dynamic Affective Movie Clip Database for Subjectivity Analysis}, booktitle = {Proceedings of the 11th {{International Conference}} on {{Affective Computing}} and {{Intelligent Interaction}} ({{ACII}})}, author = {Jeffrey M. Girard and Yanmei Tie and Einat Liebenthal}, year = {2023}, pages = {1--8}, address = {Cambridge, MA}, doi = {10.1109/ACII59096.2023.10388135}, note = {Last visited on 01/25/2024} }
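To make the paradigm shift concrete, the sketch below contrasts the traditional single averaged label with the per-moment rating distributions the database releases; the array shape, rater count, and rating scale here are illustrative stand-ins, not the database's exact format.

import numpy as np

# Illustrative stand-in: ratings for one clip, shape (n_raters, n_timepoints);
# in real data, NaN would mark raters who did not rate this clip.
rng = np.random.default_rng(0)
ratings = rng.normal(loc=5.0, scale=1.5, size=(77, 120))

mean_label = np.nanmean(ratings, axis=0)      # the single "ground truth" of prior work
spread = np.nanstd(ratings, axis=0)           # moment-to-moment disagreement across raters
ambiguous_moments = np.argsort(spread)[-10:]  # candidate moments of ambiguity or ambivalence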
Kim, Küster, Girard & Krumhuber
Frontiers in Psychology
A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which the dynamic advantage occurs. The aim of this research was to test emotion recognition in static and dynamic facial expressions, thereby exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and the machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli compared to non-target images. Such benefit disappeared in the context of target-emotion images, which were recognised as well as (or even better than) videos and were more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power in machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was found to be superior to humans for both target and non-target images. Together, the findings point towards a compensatory role of dynamic information, particularly when static-based stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed.
@article{kim2023, title = {Human and Machine Recognition of Dynamic and Static Facial Expressions: Prototypicality, Ambiguity, and Complexity}, author = {Hyunwoo Kim and Dennis K{\"u}ster and Jeffrey M. Girard and Eva G. Krumhuber}, year = {2023}, journal = {Frontiers in Psychology}, volume = {14}, doi = {10.3389/fpsyg.2023.1221081}, note = {Last visited on 09/03/2025} }
Swartz, Bylsma, Fournier, Girard, Spotts, Cohn & Morency
Journal of Affective Disorders
Background: Expert consensus guidelines recommend Cognitive Behavioral Therapy (CBT) and Interpersonal Psychotherapy (IPT), interventions that were historically delivered face-to-face, as first-line treatments for Major Depressive Disorder (MDD). Despite the ubiquity of telehealth following the COVID-19 pandemic, little is known about differential outcomes with CBT versus IPT delivered in-person (IP) or via telehealth (TH), or whether working alliance is affected. Methods: Adults meeting DSM-5 criteria for MDD were randomly assigned to either 8 sessions of IPT or CBT (group). Mid-trial, COVID-19 forced a change of therapy delivery from IP to TH (study phase). We compared changes in Hamilton Rating Scale for Depression (HRSD-17) and Working Alliance Inventory (WAI) scores for individuals by group and phase: CBT-IP (n = 24), CBT-TH (n = 11), IPT-IP (n = 25), and IPT-TH (n = 17). Results: HRSD-17 scores declined significantly from pre to post treatment (pre: M = 17.7
@article{swartz2023, title = {Randomized Trial of Brief Interpersonal Psychotherapy and Cognitive Behavioral Therapy for Depression Delivered Both In-Person and by Telehealth}, author = {Holly A Swartz and Lauren M Bylsma and Jay C Fournier and Jeffrey M Girard and Crystal Spotts and Jeffrey F Cohn and Louis-Philippe Morency}, year = {2023}, journal = {Journal of Affective Disorders}, volume = {333}, pages = {543--552}, doi = {10.1016/j.jad.2023.04.092} }
Vail, Girard, Bylsma, Fournier, Swartz, Cohn & Morency
Proceedings of the 25th International Conference on Multimodal Interaction
Characterizing the dynamics of behavior across multiple modalities and individuals is a vital component of computational behavior analysis. This is especially important in certain applications, such as psychotherapy, where individualized tracking of behavior patterns can provide valuable information about the patient's mental state. Conventional methods that rely on aggregate statistics and correlational metrics may not always suffice, as they are often unable to capture causal relationships or evaluate the true probability of identified patterns. To address these challenges, we present a novel approach to learning multimodal and interpersonal representations of behavior dynamics during one-on-one interaction. Our approach is enabled by the introduction of a multiview extension of latent change score models, which facilitates the concurrent capture of both inter-modal and interpersonal behavior dynamics and the identification of directional relationships between them. A core advantage of our approach is its high level of interpretability while simultaneously achieving strong predictive performance. We evaluate our approach within the domain of therapist-client interactions, with the objective of gaining a deeper understanding about the collaborative relationship between the two, a crucial element of the therapeutic process. Our results demonstrate improved performance over conventional approaches that rely upon summary statistics or correlational metrics. Furthermore, since our multiview approach includes the explicit modeling of uncertainty, it naturally lends itself to integration with probabilistic classifiers, such as Gaussian process models. We demonstrate that this integration leads to even further improved performance, all the while maintaining highly interpretable qualities. Our analysis provides compelling motivation for further exploration of stochastic systems within computational models of behavior.
@inproceedings{vail2023, title = {Representation Learning for Interpersonal and Multimodal Behavior Dynamics: A Multiview Extension of Latent Change Score Models}, booktitle = {Proceedings of the 25th {{International Conference}} on {{Multimodal Interaction}}}, author = {Alexandria K. Vail and Jeffrey M. Girard and Lauren M. Bylsma and Jay Fournier and Holly A. Swartz and Jeffrey F. Cohn and Louis-Philippe Morency}, year = {2023}, series = {{{ICMI}} '23}, pages = {517--526}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, doi = {10.1145/3577190.3614118}, note = {Last visited on 09/02/2025} }
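For orientation, a generic bivariate latent change score specification (standard notation, not necessarily the exact parameterization used in the paper) writes each construct's latent change as a function of its own prior level plus a coupling term from the other construct:

x_t = x_{t-1} + \Delta x_t
\Delta x_t = \alpha_x + \beta_x \, x_{t-1} + \gamma_{xy} \, y_{t-1} + \zeta_{x,t}
\Delta y_t = \alpha_y + \beta_y \, y_{t-1} + \gamma_{yx} \, x_{t-1} + \zeta_{y,t}

The coupling parameters \gamma_{xy} and \gamma_{yx} are what permit directional (lead-lag) relationships to be identified; the paper's multiview extension generalizes this idea so that x and y can be different modalities or different interaction partners.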

2022

Girard, Vail, Liebenthal, Brown, Kilciksiz, Pennant, Liebson, Öngür, Morency & Baker
Schizophrenia Research
Objectives: This study aimed to (1) determine the feasibility of collecting behavioral data from participants hospitalized with acute psychosis and (2) begin to evaluate the clinical information that can be computationally derived from such data. Methods: Behavioral data was collected across 99 sessions from 38 participants recruited from an inpatient psychiatric unit. Each session started with a semi-structured interview modeled on a typical "clinical rounds" encounter and included administration of the Positive and Negative Syndrome Scale (PANSS). Analysis: We quantified aspects of participants' verbal behavior during the interview using lexical, coherence, and disfluency features. We then used two complementary approaches to explore our second objective. The first approach used predictive models to estimate participants' PANSS scores from their language features. Our second approach used inferential models to quantify the relationships between individual language features and symptom measures. Results: Our predictive models showed promise but lacked sufficient data to achieve clinically useful accuracy. Our inferential models identified statistically significant relationships between numerous language features and symptom domains. Conclusion: Our interview recording procedures were well-tolerated and produced adequate data for transcription and analysis. The results of our inferential modeling suggest that automatic measurements of expressive language contain signals highly relevant to the assessment of psychosis. These findings establish the potential of measuring language during a clinical interview in a naturalistic setting and generate specific hypotheses that can be tested in future studies. This, in turn, will lead to more accurate modeling and better understanding of the relationships between expressive language and psychosis.
@article{girard2022, title = {Computational Analysis of Spoken Language in Acute Psychosis and Mania}, author = {Jeffrey M. Girard and Alexandria K. Vail and Einat Liebenthal and Katrina Brown and Can Misel Kilciksiz and Luciana Pennant and Elizabeth Liebson and Dost {\"O}ng{\"u}r and Louis-Philippe Morency and Justin T. Baker}, year = {2022}, journal = {Schizophrenia Research}, volume = {245}, pages = {97--115}, doi = {10.1016/j.schres.2021.06.040}, note = {Last visited on 02/20/2022} }
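As a toy illustration of the kinds of lexical and disfluency features named above (not the study's actual feature set or implementation), consider:

import re

def type_token_ratio(transcript: str) -> float:
    """Lexical diversity: unique words divided by total words."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return len(set(words)) / len(words) if words else 0.0

def filled_pause_rate(transcript: str) -> float:
    """Disfluency: filled pauses (um/uh and similar) per 100 words."""
    words = re.findall(r"[a-z']+", transcript.lower())
    pauses = sum(w in {"um", "uh", "er", "erm"} for w in words)
    return 100 * pauses / len(words) if words else 0.0

# Example usage on a hypothetical transcript snippet:
print(type_token_ratio("well um I think I think it started last week"))
print(filled_pause_rate("well um I think I think it started last week"))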
Vail, Girard, Bylsma, Cohn, Fournier, Swartz & Morency
Proceedings of the 24th ACM International Conference on Multimodal Interaction
@inproceedings{vail2022, title = {Toward Causal Understanding of Therapist-Client Relationships: A Study of Language Modality and Social Entrainment}, booktitle = {Proceedings of the 24th {{ACM International Conference}} on {{Multimodal Interaction}}}, author = {Alexandria K Vail and Jeffrey M Girard and Lauren M Bylsma and Jeffrey F Cohn and Jay Fournier and Holly A Swartz and Louis-Philippe Morency}, year = {2022}, pages = {487--494}, address = {Bengaluru, India}, doi = {10/gt3bwj} }
van Oest & Girard
Psychological Methods
Van Oest (2019) developed a framework to assess interrater agreement for nominal categories and complete data. We generalize this framework to all four situations of nominal or ordinal categories and complete or incomplete data. The mathematical solution yields a chance-corrected agreement coefficient that accommodates any weighting scheme for penalizing rater disagreements and any number of raters and categories. By incorporating Bayesian estimates of the category proportions, the generalized coefficient also captures situations in which raters classify only subsets of items; that is, incomplete data. Furthermore, this coefficient encompasses existing chance-corrected agreement coefficients: the S-coefficient, Scott's pi, Fleiss' kappa, and Van Oest's uniform prior coefficient, all augmented with a weighting scheme and the option of incomplete data. We use simulation to compare these nested coefficients. The uniform prior coefficient tends to perform best, in particular, if one category has a much larger proportion than others. The gap with Scott's pi and Fleiss' kappa widens if the weighting scheme becomes more lenient to small disagreements and often if more item classifications are missing; missingness biases play a moderating role. The uniform prior coefficient often performs much better than the S-coefficient, but the S-coefficient sometimes performs best for small samples, missing data, and lenient weighting schemes. The generalized framework implies a new interpretation of chance-corrected weighted agreement coefficients: These coefficients estimate the probability that both raters in a pair assign an item to its correct category without guessing. Whereas Van Oest showed this interpretation for unweighted agreement, we generalize to weighted agreement.
Translational Abstract: Many studies and assessments require classification of subjective items (e.g., text) into categories (e.g., based on content). To assess whether the results are reproducible, it is good practice to let two or more raters independently classify the items, compute the proportion of pairwise rater agreement, and adjust for agreement expected by chance. Most chance-corrected agreement coefficients assume nominal categories and include only full agreements in which raters choose the same category. However, many situations (e.g., point scales) imply ordinal categories, where raters may receive partial credit for disagreements, based on the distance of their chosen categories and captured by a weighting scheme. Furthermore, raters often classify only subsets of items, where the missing data occur either by accident or by design. The present study develops a framework to estimate chance-corrected agreement for all four combinations of nominal or ordinal categories and complete or incomplete data. The resulting coefficient requires only a few lines of programming code and captures several existing coefficients via different values of its input parameters; it augments all nested coefficients with a weighting scheme and the option of missing item classifications. We use simulation to compare the coefficient performances for different weighting schemes, missing data mechanisms, and category proportions: the so-called uniform prior coefficient often (but not always) performs best. Furthermore, our framework implies that chance-corrected agreement coefficients, both unweighted and weighted, estimate the probability that both raters in a pair assign an item to its correct category without guessing.
@article{vanoest2022, title = {Weighting Schemes and Incomplete Data: A Generalized {{Bayesian}} Framework for Chance-Corrected Interrater Agreement}, author = {Rutger {van Oest} and Jeffrey M. Girard}, year = {2022}, journal = {Psychological Methods}, volume = {27}, number = {6}, pages = {1069--1088}, doi = {10.1037/met0000412}, note = {Last visited on 11/16/2021} }
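To ground the terminology, here is a minimal sketch of the classic two-rater, complete-data form of a chance-corrected weighted agreement coefficient, (p_o - p_e) / (1 - p_e), with linear ordinal weights and Scott-style expected agreement; the paper's contribution (Bayesian category proportions, any number of raters, incomplete data) goes well beyond this.

import numpy as np

def weighted_agreement(a, b, n_categories):
    """Chance-corrected weighted agreement for two raters, complete data."""
    a, b = np.asarray(a), np.asarray(b)
    # Linear weights: full credit for exact agreement, partial credit for near misses.
    cats = np.arange(n_categories)
    w = 1 - np.abs(cats[:, None] - cats[None, :]) / (n_categories - 1)

    p_obs = w[a, b].mean()  # weighted observed agreement
    # Expected agreement under chance from pooled category proportions
    # (as in Scott's pi); nested coefficients differ mainly in this step.
    p = np.bincount(np.concatenate([a, b]), minlength=n_categories) / (2 * len(a))
    p_exp = p @ w @ p
    return (p_obs - p_exp) / (1 - p_exp)

# Example: two raters classifying six items on a 5-point ordinal scale.
r1 = [0, 1, 2, 4, 3, 2]
r2 = [0, 2, 2, 3, 3, 2]
print(weighted_agreement(r1, r2, 5))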

2021

Sewall, Girard, Merranko, Hafeman, Goldstein, Strober, Hower, Weinstock, Yen, Ryan, Keller, Liao, Diler, Gill, Axelson, Birmaher & Goldstein
The Journal of Child Psychology and Psychiatry
@article{sewall2021, title = {A {{Bayesian}} Multilevel Analysis of the Longitudinal Associations between Relationship Quality and Suicidal Ideation and Attempts among Youth with Bipolar Disorder}, author = {Craig J. R. Sewall and Jeffrey M. Girard and John Merranko and Danella Hafeman and Benjamin I. Goldstein and Michael Strober and Heather Hower and Lauren M. Weinstock and Shirley Yen and Neal D. Ryan and Martin B. Keller and Fangzi Liao and Rasim S. Diler and Mary Kay Gill and David Axelson and Boris Birmaher and Tina R. Goldstein}, year = {2021}, journal = {The Journal of Child Psychology and Psychiatry}, volume = {62}, number = {7}, pages = {905--915}, doi = {10.1111/jcpp.13343} }
Vail, Girard, Bylsma, Cohn, Fournier, Swartz & Morency
Proceedings of the 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG)
Early client dropout is one of the most significant challenges facing psychotherapy: recent studies suggest that at least one in five clients will leave treatment prematurely. Clients may terminate therapy for various reasons, but one of the most common causes is the lack of a strong working alliance. The concept of working alliance captures the collaborative relationship between a client and their therapist when working toward the progress and recovery of the client seeking treatment. Unfortunately, clients are often unwilling to directly express dissatisfaction in care until they have already decided to terminate therapy. On the other side, therapists may miss subtle signs of client discontent during treatment before it is too late. In this work, we demonstrate that nonverbal behavior analysis may aid in bridging this gap. The present study focuses primarily on the head gestures of both the client and therapist, contextualized within conversational turn-taking actions between the pair during psychotherapy sessions. We identify multiple behavior patterns suggestive of an individual's perspective on the working alliance; interestingly, these patterns also differ between the client and the therapist. These patterns inform the development of predictive models for self-reported ratings of working alliance, which demonstrate significant predictive power for both client and therapist ratings. Future applications of such models may stimulate preemptive intervention to strengthen a weak working alliance, whether explicitly attempting to repair the existing alliance or establishing a more suitable client-therapist pairing, to ensure that clients encounter fewer barriers to receiving the treatment they need.
@inproceedings{vail2021, title = {Goals, Tasks, and Bonds: Toward the Computational Assessment of Therapist versus Client Perception of Working Alliance}, booktitle = {Proceedings of the 16th {{IEEE International Conference}} on {{Automatic Face}} and {{Gesture Recognition}} ({{FG}})}, author = {Alexandria K. Vail and Jeffrey M. Girard and Lauren Bylsma and Jeffrey F. Cohn and Jay Fournier and Holly A. Swartz and Louis-Philippe Morency}, year = {2021}, pages = {1--8}, doi = {10/gpfjrn} }
Bowdring, Sayette, Girard & Woods
Journal of Nonverbal Behavior
Physical attractiveness plays a central role in psychosocial experiences. One of the top research priorities has been to identify factors affecting perceptions of physical attractiveness (PPA). Recent work suggests PPA derives from different sources (e.g., target, perceiver, stimulus type). Although smiles in particular are believed to enhance PPA, support has been surprisingly limited. This study comprehensively examines the effect of smiles on PPA and, more broadly, evaluates the roles of target, perceiver, and stimulus type in PPA variation. Perceivers (n = 181) rated both static images and 5-s videos of targets displaying smiling and neutral expressions. Smiling images were rated as more attractive than neutral-expression images (regardless of stimulus motion format). Interestingly, perceptions of physical attractiveness were based more on the perceiver than on either the target or the format in which the target was presented. Results clarify the effect of smiles, and highlight the significant role of the perceiver, in PPA.
@article{bowdring2021, title = {In the Eye of the Beholder: A Comprehensive Analysis of Stimulus Type, Perceiver, and Target in Physical Attractiveness Perceptions}, author = {Molly A. Bowdring and Michael A. Sayette and Jeffrey M. Girard and William C. Woods}, year = {2021}, journal = {Journal of Nonverbal Behavior}, volume = {45}, number = {2}, pages = {241--259}, doi = {10.1007/s10919-020-00350-2}, note = {Last visited on 01/10/2021} }
Girard, Cohn, Yin & Morency
Affective Science
The common view of emotional expressions is that certain configurations of facial-muscle movements reliably reveal certain categories of emotion. The principal exemplar of this view is the Duchenne smile, a configuration of facial-muscle movements (i.e., smiling with eye constriction) that has been argued to reliably reveal genuine positive emotion. In this paper, we formalized a list of hypotheses that have been proposed regarding the Duchenne smile, briefly reviewed the literature weighing on these hypotheses, identified limitations and unanswered questions, and conducted two empirical studies to begin addressing these limitations and answering these questions. Both studies analyzed a database of 751 smiles observed while 136 participants completed experimental tasks designed to elicit amusement, embarrassment, fear, and physical pain. Study 1 focused on participants' self-reported positive emotion and Study 2 focused on how third-party observers would perceive videos of these smiles. Most of the hypotheses that have been proposed about the Duchenne smile were either contradicted by or only weakly supported by our data. Eye constriction did provide some information about experienced positive emotion, but this information was lacking in specificity, already provided by other smile characteristics, and highly dependent on context. Eye constriction provided more information about perceived positive emotion, including some unique information over other smile characteristics, but context was also important here as well. Overall, our results suggest that accurately inferring positive emotion from a smile requires more sophisticated methods than simply looking for the presence/absence (or even the intensity) of eye constriction.
@article{girard2021, title = {Reconsidering the {{Duchenne}} Smile: Formalizing and Testing Hypotheses about Eye Constriction and Positive Emotion}, author = {Jeffrey M. Girard and Jeffrey F. Cohn and Lijun Yin and Louis-Philippe Morency}, year = {2021}, journal = {Affective Science}, volume = {2}, number = {1}, pages = {32--47}, doi = {10.1007/s42761-020-00030-w}, note = {Last visited on 01/18/2021} }
Wolfert, Girard, Kucherenko & Belpaeme
Proceedings of the 23rd International Conference on Multimodal Interaction
While automatic performance metrics are crucial for machine learning of artificial human-like behaviour, the gold standard for evaluation remains human judgement. The subjective evaluation of artificial human-like behaviour in embodied conversational agents is, however, expensive, and little is known about the quality of the data it returns. Two approaches to subjective evaluation can broadly be distinguished: one relying on ratings, the other on pairwise comparisons. In this study we use co-speech gestures to compare the two against each other and answer questions about their appropriateness for evaluation of artificial behaviour. We consider their ability to rate quality, but also aspects pertaining to the effort of use and the time required to collect subjective data. We use crowdsourcing to rate the quality of co-speech gestures in avatars, assessing which method picks up more detail in subjective assessments. We compared gestures generated by three different machine learning models with various levels of behavioural quality. We found that both approaches were able to rank the videos according to quality and that the rankings significantly correlated, showing that in terms of quality there is no preference of one method over the other. We also found that pairwise comparisons were slightly faster and came with improved inter-rater reliability, suggesting that for small-scale studies pairwise comparisons are to be favoured over ratings.
@inproceedings{wolfert2021, title = {To Rate or Not to Rate: Investigating Evaluation Methods for Generated Co-Speech Gestures}, booktitle = {Proceedings of the 23rd {{International Conference}} on {{Multimodal Interaction}}}, author = {Pieter Wolfert and Jeffrey M. Girard and Taras Kucherenko and Tony Belpaeme}, year = {2021}, pages = {494--502}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, doi = {10.1145/3462244.3479889}, note = {Last visited on 02/17/2022} }
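Aggregating many pairwise judgments into a single quality ranking is the step this kind of comparison hinges on. As a minimal sketch (hypothetical judgments and model names, and a simple win-rate tally rather than anything taken from the paper), the idea looks like this in Python:

# Minimal sketch: turning pairwise comparisons into a quality ranking via
# win rates. The judgments below are made up for illustration.
from collections import defaultdict

# Each judgment is (winner, loser) for videos from three gesture models.
judgments = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]

wins = defaultdict(int)
appearances = defaultdict(int)
for winner, loser in judgments:
    wins[winner] += 1
    appearances[winner] += 1
    appearances[loser] += 1

# Rank models by the proportion of comparisons they won.
ranking = sorted(appearances, key=lambda m: wins[m] / appearances[m], reverse=True)
print(ranking)  # ['A', 'C', 'B']; such a ranking can then be correlated
                # with a ratings-based ranking (e.g., via Spearman's rho)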

2020

Lin, Girard & Morency
Proceedings of the 3rd Workshop on Affective Content Analysis Co-Located with the 34th AAAI Conference on Artificial Intelligence
In recent years, extensive research has emerged in affective computing on topics like automatic emotion recognition and determining the signals that characterize individual emotions. Much less studied, however, is expressiveness—the extent to which someone shows any feeling or emotion. Expressiveness is related to personality and mental health and plays a crucial role in social interaction. As such, the ability to automatically detect or predict expressiveness can facilitate significant advancements in areas ranging from psychiatric care to artificial social intelligence. Motivated by these potential applications, we present an extension of the BP4D+ dataset (Zhang et al. 2016) with human ratings of expressiveness and develop methods for (1) automatically predicting expressiveness from visual data and (2) defining relationships between interpretable visual signals and expressiveness. In addition, we study the emotional context in which expressiveness occurs and hypothesize that different sets of signals are indicative of expressiveness in different contexts (e.g., in response to surprise or in response to pain). Analysis of our statistical models confirms our hypothesis. Consequently, by looking at expressiveness separately in distinct emotional contexts, our predictive models show significant improvements over baselines and achieve comparable results to human performance in terms of correlation with the ground truth.
@inproceedings{lin2020a, title = {Context-Dependent Models for Predicting and Characterizing Facial Expressiveness}, booktitle = {Proceedings of the 3rd {{Workshop}} on {{Affective Content Analysis}} Co-Located with the 34th {{AAAI Conference}} on {{Artificial Intelligence}}}, author = {Victoria Lin and Jeffrey M. Girard and Louis-Philippe Morency}, year = {2020}, volume = {2614}, pages = {11--28}, publisher = {AAAI Press}, address = {New York, NY} }
Muszynski, Zelazny, Girard & Morency
Proceedings of the 22nd International Conference on Multimodal Interaction
Recent progress in artificial intelligence has led to the development of automatic behavioral marker recognition, such as facial and vocal expressions. Those automatic tools have enormous potential to support mental health assessment, clinical decision making, and treatment planning.
@inproceedings{muszynski2020, title = {Depression Severity Assessment for Adolescents at High Risk of Mental Disorders}, booktitle = {Proceedings of the 22nd {{International Conference}} on {{Multimodal Interaction}}}, author = {Michal Muszynski and Jamie Zelazny and Jeffrey M. Girard and Louis-Philippe Morency}, year = {2020}, pages = {70--78}, publisher = {ACM}, address = {Virtual Event Netherlands}, doi = {10.1145/3382507.3418859}, note = {Last visited on 10/26/2020} }
Hopwood, Harrison, Amole, Girard, Wright, Thomas, Sadler, Ansell, Chaplin, Morey, Crowley, Durbin & Kashy
Assessment
NA
@article{hopwood2020, title = {Properties of the Continuous Assessment of Interpersonal Dynamics across Sex, Level of Familiarity, and Interpersonal Conflict}, author = {Christopher J. Hopwood and Alana L. Harrison and Marlissa C Amole and Jeffrey M. Girard and Aidan G. C. Wright and Katherine M. Thomas and Pamela Sadler and Emily B. Ansell and Tara M. Chaplin and Leslie C. Morey and Michael J. Crowley and C. Emily Durbin and Deborah A. Kashy}, year = {2020}, journal = {Assessment}, volume = {27}, number = {1}, pages = {40--56}, doi = {10.1177/1073191118798916} }
Lin, Girard, Sayette & Morency
Proceedings of the 22nd International Conference on Multimodal Interaction
Emotional expressiveness captures the extent to which a person tends to outwardly display their emotions through behavior. Due to the close relationship between emotional expressiveness and behavioral health, as well as the crucial role that it plays in social interaction, the ability to automatically predict emotional expressiveness stands to spur advances in science, medicine, and industry. In this paper, we explore three related research questions. First, how well can emotional expressiveness be predicted from visual, linguistic, and multimodal behavioral signals? Second, how important is each behavioral modality to the prediction of emotional expressiveness? Third, which behavioral signals are reliably related to emotional expressiveness? To answer these questions, we add highly reliable transcripts and human ratings of perceived emotional expressiveness to an existing video database and use this data to train, validate, and test predictive models. Our best model shows promising predictive performance on this dataset (RMSE = 0.65, R² = 0.45, r = 0.74). Multimodal models tend to perform best overall, and models trained on the linguistic modality tend to outperform models trained on the visual modality. Finally, examination of our interpretable models' coefficients reveals a number of visual and linguistic behavioral signals—such as facial action unit intensity, overall word count, and use of words related to social processes—that reliably predict emotional expressiveness.
@inproceedings{lin2020, title = {Toward Multimodal Modeling of Emotional Expressiveness}, booktitle = {Proceedings of the 22nd {{International Conference}} on {{Multimodal Interaction}}}, author = {Victoria Lin and Jeffrey M. Girard and Michael A. Sayette and Louis-Philippe Morency}, year = {2020}, pages = {548--557}, publisher = {ACM}, address = {Virtual Event Netherlands}, doi = {10.1145/3382507.3418887}, note = {Last visited on 10/26/2020} }

2019

Gerber, Girard, Scott & Lerner
Behaviour Research and Therapy
Objective: While much is known about the quality of social behavior among neurotypical individuals and those with autism spectrum disorder (ASD), little work has evaluated quantity of social interactions. This study used ecological momentary assessment (EMA) to quantify in vivo daily patterns of social interaction in adults as a function of demographic and clinical factors. Method: Adults with and without ASD (ASD: n = 23; neurotypical: n = 52) were trained in an EMA protocol to report their social interactions via smartphone over one week. Participants completed measures of IQ, ASD symptom severity, and alexithymia symptom severity. Results: Cyclical multilevel models were used to account for nesting of observations. Results suggest a daily cyclical pattern of social interaction that was robust to ASD and alexithymia symptoms. Adults with ASD did not have fewer social interactions than neurotypical peers; however, severity of alexithymia symptoms predicted fewer social interactions regardless of ASD status. Conclusions: These findings suggest that alexithymia, not ASD severity, may drive social isolation and highlight the need to reevaluate previously accepted notions regarding differences in social behavior utilizing modern methods.
@article{gerber2019, title = {Alexithymia -- Not Autism -- Is Associated with Frequency of Social Interactions in Adults}, author = {Alan H. Gerber and Jeffrey M. Girard and Stacey B. Scott and Matthew D. Lerner}, year = {2019}, journal = {Behaviour Research and Therapy}, volume = {123}, pages = {103477}, doi = {10.1016/j.brat.2019.103477} }
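A cyclical multilevel model of the kind mentioned above can be sketched by encoding time of day with sine and cosine terms while nesting observations within participants. The snippet below is a minimal illustration with synthetic data and invented column names, not the study's analysis code:

# Minimal sketch of a cyclical multilevel model (illustrative column
# names; not the study's code). A 24-hour cycle is encoded with sine and
# cosine terms, and random intercepts account for nesting within person.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n, per = 75, 35  # participants, prompts per participant
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), per),
    "hour": rng.uniform(8, 22, size=n * per),
})
df["sin24"] = np.sin(2 * np.pi * df["hour"] / 24)
df["cos24"] = np.cos(2 * np.pi * df["hour"] / 24)
df["interactions"] = (1.5 + 0.8 * df["sin24"] - 0.3 * df["cos24"]
                      + rng.normal(scale=0.5, size=n * per))

# Random-intercept model with a sinusoidal fixed effect for time of day.
model = smf.mixedlm("interactions ~ sin24 + cos24", df, groups=df["pid"])
print(model.fit().summary())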
McDuff & Girard
Proceedings of the 8th International Conference on Affective Computing and Intelligent Interaction (ACII)
NA
@inproceedings{mcduff2019, title = {Democratizing Psychological Insights from Analysis of Nonverbal Behavior}, booktitle = {Proceedings of the 8th {{International Conference}} on {{Affective Computing}} and {{Intelligent Interaction}} ({{ACII}})}, author = {Daniel McDuff and Jeffrey M. Girard}, year = {2019}, pages = {220--226}, publisher = {IEEE}, address = {Cambridge, UK}, doi = {10.1109/acii.2019.8925503} }
Grove, Smith, Girard & Wright
Journal of Personality Disorders
NA
@article{grove2019, title = {Narcissistic Admiration and Rivalry: An Interpersonal Approach to Construct Validation}, author = {Jeremy L. Grove and Timothy W. Smith and Jeffrey M. Girard and Aidan G. Wright}, year = {2019}, journal = {Journal of Personality Disorders}, volume = {33}, number = {6}, pages = {751--775}, doi = {10.1521/pedi_2019_33_374}, note = {Last visited on 01/17/2020} }
Girard, Shandar, Liu, Cohn, Yin & Morency
Proceedings of the 8th International Conference on Affective Computing and Intelligent Interaction (ACII)
NA
@inproceedings{girard2019, title = {Reconsidering the Duchenne Smile: Indicator of Positive Emotion or Artifact of Smile Intensity?}, booktitle = {Proceedings of the 8th {{International Conference}} on {{Affective Computing}} and {{Intelligent Interaction}} ({{ACII}})}, author = {Jeffrey M. Girard and Gayatri Shandar and Zhun Liu and Jeffrey F. Cohn and Lijun Yin and Louis-Philippe Morency}, year = {2019}, pages = {594--599}, publisher = {IEEE}, address = {Cambridge, UK}, doi = {10.1109/acii.2019.8925535} }

2018

Cohn, Ertugrul, Chu, Girard & Hammal
Multimodal Behavior Analysis in the Wild: Advances and Challenges
NA
@incollection{cohn2018, title = {Affective Facial Computing: Generalizability across Domains}, booktitle = {Multimodal Behavior Analysis in the Wild: {{Advances}} and Challenges}, author = {Jeffrey F. Cohn and Itir Onal Ertugrul and Wen-Sheng Chu and Jeffrey M. Girard and Zakia Hammal}, editor = {Xavier Alameda-Pineda and Elisa Ricci and Nicu Sebe}, year = {2018}, pages = {407--441}, publisher = {Academic Press}, url = {https://shop.elsevier.com/books/multimodal-behavior-analysis-in-the-wild/alameda-pineda/978-0-12-814601-9} }
Girard & Wright
Behavior Research Methods
NA
@article{girard2018, title = {{{DARMA}}: Software for Dual Axis Rating and Media Annotation}, author = {Jeffrey M. Girard and Aidan G C Wright}, year = {2018}, journal = {Behavior Research Methods}, volume = {50}, number = {3}, pages = {902--909}, doi = {10.3758/s13428-017-0915-5} }
Pacella, Girard, Wright, Suffoletto & Callaway
Academic Emergency Medicine
NA
@article{pacella2018, title = {The Association between Daily Posttraumatic Stress Symptoms and Pain over the First 14 Days after Injury: {{An}} Experience Sampling Study}, author = {Maria L. Pacella and Jeffrey M. Girard and Aidan G. C. Wright and Brian Suffoletto and Clifton W. Callaway}, year = {2018}, journal = {Academic Emergency Medicine}, volume = {25}, number = {8}, pages = {844--855}, doi = {10.1111/acem.13406} }

2017

Shepherd, Sly & Girard
Journal of Adolescence
The purpose of this study was to identify predictors of sexual behavior and condom use in African American adolescents, as well as to evaluate the effectiveness of comprehensive sexuality and abstinence-only education to reduce adolescent sexual behavior and increase condom use. Participants included 450 adolescents aged 12–14 years in the southern United States. Regression analyses showed that favorable attitudes toward sexual behavior and social norms significantly predicted recent sexual behavior, and favorable attitudes toward condoms significantly predicted condom use. Self-efficacy was not found to be predictive of adolescents' sexual behavior or condom use. There were no significant differences in recent sexual behavior based on type of sexuality education. Adolescents who received abstinence-only education had reduced favorable attitudes toward condom use, and were more likely to have unprotected sex than the comparison group. Findings suggest that adolescents who receive abstinence-only education are at greater risk of engaging in unprotected sex.
@article{shepherd2017, title = {Comparison of Comprehensive and Abstinence-Only Sexuality Education in Young African American Adolescents}, author = {Lindsay M. Shepherd and Kaye F. Sly and Jeffrey M. Girard}, year = {2017}, journal = {Journal of Adolescence}, volume = {61}, pages = {50--63}, doi = {10.1016/j.adolescence.2017.09.006} }
Valstar, Sanchez-Lozano, Cohn, Jeni, Girard, Zhang, Yin & Pantic
Proceedings of the 12th IEEE Conference on Automatic Face and Gesture Recognition (FG)
The field of Automatic Facial Expression Analysis has grown rapidly in recent years. However, despite progress in new approaches as well as benchmarking efforts, most evaluations still focus on either posed expressions, near-frontal recordings, or both. This makes it hard to tell how existing expression recognition approaches perform under conditions where faces appear in a wide range of poses (or camera views), displaying ecologically valid expressions. The main obstacle for assessing this is the availability of suitable data, and the challenge proposed here addresses this limitation. The FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) extends FERA 2015 to the estimation of Action Unit (AU) occurrence and intensity under different camera views. In this paper we present the third challenge in automatic recognition of facial expressions, to be held in conjunction with the 12th IEEE conference on Face and Gesture Recognition, May 2017, in Washington, United States. Two sub-challenges are defined: the detection of AU occurrence, and the estimation of AU intensity. In this work we outline the evaluation protocol, the data used, and the results of a baseline method for both sub-challenges.
@inproceedings{valstar2017, title = {{{FERA}} 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge}, booktitle = {Proceedings of the 12th {{IEEE Conference}} on {{Automatic Face}} and {{Gesture Recognition}} ({{FG}})}, author = {Michel F Valstar and Enrique Sanchez-Lozano and Jeffrey F. Cohn and Laszlo A Jeni and Jeffrey M. Girard and Zheng Zhang and Lijun Yin and Maja Pantic}, year = {2017}, pages = {839--847}, publisher = {IEEE}, address = {Washington, DC}, doi = {10.1109/fg.2017.107} }
Girard & McDuff
Proceedings of the 12th IEEE International Conference on Automatic Face and Gesture Recognition (FG)
Facial behavior is a valuable source of information about an individual's feelings and intentions. However, many factors combine to influence and moderate facial behavior, including personality, gender, context, and culture. Due to the high cost of traditional observational methods, the relationship between culture and facial behavior is not well understood. In the current study, we explored the sociocultural factors that influence facial behavior using large-scale observational analyses. We developed and implemented an algorithm to automatically analyze the smiling of 866,726 participants across 31 different countries. We found that participants smiled more when from a country that is higher in individualism, has a lower population density, and has a long history of immigration diversity (i.e., historical heterogeneity). Our findings provide the first evidence that historical heterogeneity predicts actual smiling behavior. Furthermore, they converge with previous findings using self-report methods. Taken together, these findings support the theory that historical heterogeneity explains, and may even contribute to the development of, permissive cultural display rules that encourage the open expression of emotion.
@inproceedings{girard2017a, title = {Historical Heterogeneity Predicts Smiling: Evidence from Large-Scale Observational Analyses}, booktitle = {Proceedings of the 12th {{IEEE International Conference}} on {{Automatic Face}} and {{Gesture Recognition}} ({{FG}})}, author = {Jeffrey M. Girard and Daniel McDuff}, year = {2017}, pages = {719--726}, publisher = {IEEE}, address = {Washington, DC}, doi = {10.1109/fg.2017.135} }
Girard, Wright, Beeney, Lazarus, Scott, Stepp & Pilkonis
Comprehensive Psychiatry
NA
@article{girard2017, title = {Interpersonal Problems across Levels of the Psychopathology Hierarchy}, author = {Jeffrey M. Girard and Aidan G. C. Wright and Joseph E Beeney and Sophie A Lazarus and Lori N Scott and Stephanie D Stepp and Paul A Pilkonis}, year = {2017}, journal = {Comprehensive Psychiatry}, volume = {79}, pages = {53--69}, doi = {10.1016/j.comppsych.2017.06.014} }
McDuff, Girard & El Kaliouby
Journal of Nonverbal Behavior
Self-report studies have found evidence that cultures differ in the display rules they have for facial expressions (i.e., for what is appropriate for different people at different times). However, observational studies of actual patterns of facial behavior have been rare and typically limited to the analysis of dozens of participants from two or three regions. We present the first large-scale evidence of cultural differences in observed facial behavior, including 740,984 participants from 12 countries around the world. We used an Internet-based framework to collect video data of participants in two different settings: in their homes and in market research facilities. Using computer vision algorithms designed for this data set, we measured smiling and brow furrowing expressions as participants watched television ads. Our results reveal novel findings and provide empirical evidence to support theories about cultural and gender differences in display rules. Participants from more individualist cultures displayed more brow furrowing overall, whereas smiling depended on both culture and setting. Specifically, participants from more individualist countries were more expressive in the facility setting, while participants from more collectivist countries were more expressive in the home setting. Female participants displayed more smiling and less brow furrowing than male participants overall, with the latter difference being more pronounced in more individualist countries. This is the first study to leverage advances in computer science to enable large-scale observational research that would not have been possible using traditional methods.
@article{mcduff2017, title = {Large-Scale Observational Evidence of Cross-Cultural Differences in Facial Behavior}, author = {Daniel McDuff and Jeffrey M. Girard and Rana {El Kaliouby}}, year = {2017}, journal = {Journal of Nonverbal Behavior}, volume = {41}, number = {1}, pages = {1--19}, doi = {10/f92t52} }
Ross, Girard, Wright, Beeney, Scott, Hallquist, Lazarus, Stepp & Pilkonis
Psychological Assessment
NA
@article{ross2017, title = {Momentary Patterns of Covariation between Specific Affects and Interpersonal Behavior: {{Linking}} Relationship Science and Personality Assessment}, author = {Jaclyn M. Ross and Jeffrey M. Girard and Aidan G. C. Wright and Joseph E Beeney and Lori N. Scott and Michael N. Hallquist and Sophie A. Lazarus and Stephanie D. Stepp and Paul A. Pilkonis}, year = {2017}, journal = {Psychological Assessment}, volume = {29}, number = {2}, pages = {123--134}, doi = {10.1037/pas0000338} }
Girard
Proceedings of the 12th IEEE International Conference on Automated Face and Gesture Recognition (FG)
Full understanding of behavior and experience requires an appreciation of time-dependent patterns. However, traditional methods of observational measurement and self-reporting are ill-suited to capturing such patterns. These methods tend to polarize into either macro-level (gist) analyses of large swaths of time or micro-level (atomic) analyses of discrete segments. Unfortunately, both approaches miss the continuous, dynamic flow of many psychological processes. Specialized methods are needed that can capture such processes as they unfold over time and across dimensions.
@inproceedings{girard2017c, title = {Open-Source Software for Continuous Measurement and Media Annotation}, booktitle = {Proceedings of the 12th {{IEEE}} International Conference on Automated Face and Gesture Recognition ({{FG}})}, author = {Jeffrey M. Girard}, year = {2017}, volume = {31}, pages = {995--995}, doi = {10.1109/fg.2017.151} }
Girard, Chu, Jeni, Cohn, De La Torre & Sayette
Proceedings of the 12th IEEE International Conference on Automatic Face and Gesture Recognition (FG)
Despite the important role that facial expressions play in interpersonal communication and our knowledge that interpersonal behavior is influenced by social context, no currently available facial expression database includes multiple interacting participants. The Sayette Group Formation Task (GFT) database addresses the need for well-annotated video of multiple participants during unscripted interactions. The database includes 172,800 video frames from 96 participants in 32 three-person groups. To aid in the development of automated facial expression analysis systems, GFT includes expert annotations of FACS occurrence and intensity, facial landmark tracking, and baseline results for linear SVM, deep learning, active patch learning, and personalized classification. Baseline performance is quantified and compared using identical partitioning and a variety of metrics (including means and confidence intervals). The highest performance scores were found for the deep learning and active patch learning methods. Learn more at http://osf.io/7wcyz.
@inproceedings{girard2017b, title = {Sayette Group Formation Task ({{GFT}}) Spontaneous Facial Expression Database}, booktitle = {Proceedings of the 12th {{IEEE International Conference}} on {{Automatic Face}} and {{Gesture Recognition}} ({{FG}})}, author = {Jeffrey M. Girard and Wen-Sheng Chu and Laszlo A. Jeni and Jeffrey F. Cohn and F. {De La Torre} and Michael A. Sayette}, year = {2017}, pages = {581--588}, publisher = {IEEE}, address = {Washington, DC}, doi = {10.1109/fg.2017.144} }

2016

Girard & Cohn
Assessment
Observational measurement plays an integral role in a variety of scientific endeavors within biology, psychology, sociology, education, medicine, and marketing. The current article provides an interdisciplinary primer on observational measurement; in particular, it highlights recent advances in observational methodology and the challenges that accompany such growth. First, we detail the various types of instrument that can be used to standardize measurements across observers. Second, we argue for the importance of validity in observational measurement and provide several approaches to validation based on contemporary validity theory. Third, we outline the challenges currently faced by observational researchers pertaining to measurement drift, observer reactivity, reliability analysis, and time/expense. Fourth, we describe recent advances in computer-assisted measurement, fully automated measurement, and statistical data analysis. Finally, we identify several key directions for future observational research to explore.
@article{girard2016a, title = {A Primer on Observational Measurement}, author = {Jeffrey M. Girard and Jeffrey F. Cohn}, year = {2016}, journal = {Assessment}, volume = {23}, number = {4}, pages = {404--413}, doi = {10.1177/1073191116635807}, note = {Last visited on 01/13/2019} }
Zhang, Girard, Wu, Zhang, Liu, Ciftci, Canavan, Reale, Horowitz, Yang, Cohn, Ji & Yin
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
NA
@inproceedings{zhang2016, title = {Multimodal Spontaneous Emotion Corpus for Human Behavior Analysis}, booktitle = {Proceedings of the {{IEEE Conference}} on {{Computer Vision}} and {{Pattern Recognition}} ({{CVPR}})}, author = {Zheng Zhang and Jeffrey M. Girard and Yue Wu and Xing Zhang and Peng Liu and Umur Ciftci and Shaun Canavan and Michael Reale and Andrew Horowitz and Huiyuan Yang and Jeffrey F. Cohn and Qiang Ji and Lijun Yin}, year = {2016}, pages = {3438--3446}, publisher = {IEEE}, address = {Las Vegas, NV}, doi = {10.1109/cvpr.2016.374} }

2015

Girard & Cohn
Current Opinion in Psychology
Analysis of observable behavior in depression primarily relies on subjective measures. New computational approaches make possible automated audiovisual measurement of behaviors that humans struggle to quantify (e.g., movement velocity and voice inflection). These tools have the potential to improve screening and diagnosis, identify new behavioral indicators of depression, measure response to clinical intervention, and test clinical theories about underlying mechanisms. Highlights include a study that measured the temporal coordination of vocal tract and facial movements, a study that predicted which adolescents would go on to develop depression based on their voice qualities, and a study that tested the behavioral predictions of clinical theories using automated measures of facial actions and head motion.
@article{girard2015c, title = {Automated Audiovisual Depression Analysis}, author = {Jeffrey M. Girard and Jeffrey F. Cohn}, year = {2015}, journal = {Current Opinion in Psychology}, volume = {4}, pages = {75--79}, doi = {10.1016/j.copsyc.2014.12.010} }
Girard, Cohn & De la Torre
Pattern Recognition Letters
Both the occurrence and intensity of facial expressions are critical to what the face reveals. While much progress has been made toward the automatic detection of facial expression occurrence, controversy exists about how to estimate expression intensity. The most straightforward approach is to train multiclass or regression models using intensity ground truth. However, collecting intensity ground truth is even more time consuming and expensive than collecting binary ground truth. As a shortcut, some researchers have proposed using the decision values of binary-trained maximum margin classifiers as a proxy for expression intensity. We provide empirical evidence that this heuristic is flawed in practice as well as in theory. Unfortunately, there are no shortcuts when it comes to estimating smile intensity: researchers must take the time to collect and train on intensity ground truth. However, if they do so, high reliability with expert human coders can be achieved. Intensity-trained multiclass and regression models outperformed binary-trained classifier decision values on smile intensity estimation across multiple databases and methods for feature extraction and dimensionality reduction. Multiclass models even outperformed binary-trained classifiers on smile occurrence detection.
@article{girard2015b, title = {Estimating Smile Intensity: A Better Way}, author = {Jeffrey M. Girard and Jeffrey F. Cohn and Fernando {De la Torre}}, year = {2015}, journal = {Pattern Recognition Letters}, volume = {66}, pages = {13--21}, doi = {10/f7tkjg} }
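To make the contrast concrete, here is a minimal sketch of the two strategies the abstract compares: treating a binary classifier's decision values as a proxy for intensity versus training a regressor directly on intensity ground truth. Everything below (features, labels, models) is synthetic and illustrative, not the paper's pipeline:

# Minimal sketch: decision-value proxy vs. intensity-trained regression.
# Features and labels are synthetic stand-ins, not real facial data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 20))  # stand-in facial features
intensity = np.clip(X[:, 0] * 1.5 + rng.normal(size=600), 0, None)  # 0 = absent
occurrence = (intensity > 0.5).astype(int)  # binarized ground truth

train, test = slice(0, 400), slice(400, 600)

# Heuristic: distance from a binary classifier's hyperplane as "intensity".
clf = LinearSVC(dual=False).fit(X[train], occurrence[train])
proxy = clf.decision_function(X[test])

# Alternative: regression trained directly on intensity ground truth.
reg = Ridge().fit(X[train], intensity[train])
pred = reg.predict(X[test])

print("decision-value proxy r:", pearsonr(proxy, intensity[test])[0])
print("intensity-trained r:  ", pearsonr(pred, intensity[test])[0])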
Valstar, Almaev, Girard, McKeown, Mehu, Yin, Pantic & Cohn
Proceedings of the 11th IEEE International Conference on Automatic Face and Gesture Recognition (FG)
Despite efforts towards evaluation standards in facial expression analysis (e.g., FERA 2011), there is a need for up-to-date standardised evaluation procedures, focusing in particular on current challenges in the field. One of the challenges that is actively being addressed is the automatic estimation of expression intensities. To continue to provide a standardisation platform and to help the field progress beyond its current limitations, the FG 2015 Facial Expression Recognition and Analysis challenge (FERA 2015) will challenge participants to estimate FACS Action Unit (AU) intensity as well as AU occurrence on a common benchmark dataset with reliable manual annotations. Evaluation will be done using a clear and well-defined protocol. In this paper we present the second such challenge in automatic recognition of facial expressions, to be held in conjunction with the 11th IEEE conference on Face and Gesture Recognition, May 2015, in Ljubljana, Slovenia. Three sub-challenges are defined: the detection of AU occurrence, the estimation of AU intensity for pre-segmented data, and fully automatic AU intensity estimation. In this work we outline the evaluation protocol, the data used, and the results of a baseline method for the three sub-challenges.
@inproceedings{valstar2015, title = {{{FERA}} 2015 - Second Facial Expression Recognition and Analysis Challenge}, booktitle = {Proceedings of the 11th {{IEEE International Conference}} on {{Automatic Face}} and {{Gesture Recognition}} ({{FG}})}, author = {Michel F. Valstar and Timur Almaev and Jeffrey M. Girard and Gary McKeown and Marc Mehu and Lijun Yin and Maja Pantic and Jeffrey F. Cohn}, year = {2015}, pages = {1--8}, publisher = {IEEE}, address = {Ljubljana, Slovenia}, doi = {10.1109/fg.2015.7284874} }
Girard, Cohn, Jeni, Lucey & De la Torre
Proceedings of the 11th IEEE International Conference on Automatic Face and Gesture Recognition (FG)
NA
@inproceedings{girard2015d, title = {How Much Training Data for Facial Action Unit Detection?}, booktitle = {Proceedings of the 11th {{IEEE International Conference}} on {{Automatic Face}} and {{Gesture Recognition}} ({{FG}})}, author = {Jeffrey M. Girard and Jeffrey F Cohn and L{\a'a}szl{\a'o} A Jeni and Simon Lucey and Fernando {De la Torre}}, year = {2015}, pages = {1--8}, publisher = {IEEE}, address = {Ljubljana, Slovenia}, doi = {10.1109/fg.2015.7163106} }
Jeni, Girard, Cohn & Kanade
IEEE International Conference and Workshops on Automatic Face and Gesture Recognition
Face alignment is the problem of automatically locating detailed facial landmarks across different subjects, illuminations, and viewpoints. Previous methods can be divided into two broad categories. 2D-based methods locate a relatively small number of 2D fiducial points in real time while 3D-based methods fit a high-resolution 3D model offline at a much higher computational cost.
@inproceedings{jeni2015, title = {Real-Time Dense {{3D}} Face Alignment from {{2D}} Video with Automatic Facial Action Unit Coding}, booktitle = {{{IEEE International Conference}} and {{Workshops}} on {{Automatic Face}} and {{Gesture Recognition}}}, author = {L{\a'a}szl{\a'o} A Jeni and Jeffrey M. Girard and Jeffrey F. Cohn and Takeo Kanade}, year = {2015}, doi = {10.1109/fg.2015.7163165} }
Fairbairn, Sayette, Amole, Dimoff, Cohn & Girard
Experimental and Clinical Psychopharmacology
NA
@article{fairbairn2015, title = {Speech Volume Indexes Sex Differences in the Social-Emotional Effects of Alcohol}, author = {Catharine E. Fairbairn and Michael A. Sayette and Marlissa C. Amole and John D. Dimoff and Jeffrey F. Cohn and Jeffrey M. Girard}, year = {2015}, journal = {Experimental and Clinical Psychopharmacology}, volume = {23}, number = {4}, pages = {255--264}, doi = {10.1037/pha0000021} }
Girard, Cohn, Jeni, Sayette & De la Torre
Behavior Research Methods
NA
@article{girard2015a, title = {Spontaneous Facial Expression in Unscripted Social Interactions Can Be Measured Automatically}, author = {Jeffrey M. Girard and Jeffrey F. Cohn and L{\a'a}szl{\a'o} A. Jeni and Michael A. Sayette and Fernando {De la Torre}}, year = {2015}, journal = {Behavior Research Methods}, volume = {47}, number = {4}, pages = {1136--1147}, doi = {10.3758/s13428-014-0536-1} }

2014

Zhang, Yin, Cohn, Canavan, Reale, Horowitz, Liu & Girard
Image and Vision Computing
Facial expression is central to human experience. Its efficiency and valid measurement are challenges that automated facial image analysis seeks to address. Most publicly available databases are limited to 2D static images or video of posed facial behavior. Because posed and un-posed (aka "spontaneous") facial expressions differ along several dimensions including complexity and timing, well-annotated video of un-posed facial behavior is needed. Moreover, because the face is a three-dimensional deformable object, 2D video may be insufficient, and therefore 3D video archives are required. We present a newly developed 3D video database of spontaneous facial expressions in a diverse group of young adults. Well-validated emotion inductions were used to elicit expressions of emotion and paralinguistic communication. Frame-level ground-truth for facial actions was obtained using the Facial Action Coding System. Facial features were tracked in both 2D and 3D domains. To the best of our knowledge, this new database is the first of its kind for the public. The work promotes the exploration of 3D spatiotemporal features in subtle facial expression, better understanding of the relation between pose and motion dynamics in facial action units, and deeper understanding of naturally occurring facial action.
@article{zhang2014, title = {{{BP4D-spontaneous}}: A High-Resolution Spontaneous {{3D}} Dynamic Facial Expression Database}, author = {Xing Zhang and Lijun Yin and Jeffrey F. Cohn and Shaun Canavan and Michael Reale and Andy Horowitz and Peng Liu and Jeffrey M. Girard}, year = {2014}, journal = {Image and Vision Computing}, volume = {32}, number = {10}, pages = {692--706}, doi = {10.1016/j.imavis.2014.06.002} }
Girard
Journal of Open Research Software
CARMA is a media annotation program that collects continuous ratings while displaying audio and video files. It is designed to be highly user-friendly and easily customizable. Based on Gottman and Levenson's affect rating dial, CARMA enables researchers and study participants to provide moment-by-moment ratings of multimedia files using a computer mouse or keyboard. The rating scale can be configured on a number of parameters including the labels for its upper and lower bounds, its numerical range, and its visual representation. Annotations can be displayed alongside the multimedia file and saved for easy import into statistical analysis software. CARMA provides a tool for researchers in affective computing, human-computer interaction, and the social sciences who need to capture the unfolding of subjective experience and observable behavior over time.
@article{girard2014b, title = {{{CARMA}}: Software for Continuous Affect Rating and Media Annotation}, author = {Jeffrey M. Girard}, year = {2014}, journal = {Journal of Open Research Software}, volume = {2}, number = {1}, pages = {e5-e5}, doi = {10.5334/jors.ar} }
Girard, Cohn, Mahoor, Mavadati, Hammal & Rosenwald
Image and Vision Computing
The relationship between nonverbal behavior and severity of depression was investigated by following depressed participants over the course of treatment and video recording a series of clinical interviews. Facial expressions and head pose were analyzed from video using manual and automatic systems. Both systems were highly consistent for FACS action units (AUs) and showed similar effects for change over time in depression severity. When symptom severity was high, participants made fewer affiliative facial expressions (AUs 12 and 15) and more non-affiliative facial expressions (AU 14). Participants also exhibited diminished head motion (i.e., amplitude and velocity) when symptom severity was high. These results are consistent with the Social Withdrawal hypothesis: that depressed individuals use nonverbal behavior to maintain or increase interpersonal distance. As individuals recover, they send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and revealed the same pattern of findings suggests that automatic facial expression analysis may be ready to relieve the burden of manual coding in behavioral and clinical science.
@article{girard2014, title = {Nonverbal Social Withdrawal in Depression: {{Evidence}} from Manual and Automatic Analyses}, author = {Jeffrey M. Girard and Jeffrey F. Cohn and Mohammad H. Mahoor and S. Mohammad Mavadati and Zakia Hammal and Dean P. Rosenwald}, year = {2014}, journal = {Image and Vision Computing}, volume = {32}, number = {10}, pages = {641--647}, doi = {10.1016/j.imavis.2013.12.007} }
Girard
Proceedings of the 2014 International Conference on Multimodal Interaction
Across multiple channels, nonverbal behavior communicates information about affective states and interpersonal intentions. Researchers interested in understanding how these nonverbal messages are transmitted and interpreted have examined the relationship between behavior and ratings of interpersonal motives using dimensions such as agency and communion. However, previous work has focused on images of posed behavior and it is unclear how well these results will generalize to more dynamic representations of real-world behavior. The current study proposes to extend the current literature by examining how gender, facial expression intensity, and head pose influence interpersonal ratings in videos of spontaneous nonverbal behavior.
@inproceedings{girard2014a, title = {Perceptions of Interpersonal Behavior Are Influenced by Gender, Facial Expression Intensity, and Head Pose}, booktitle = {Proceedings of the 2014 {{International Conference}} on {{Multimodal Interaction}}}, author = {Jeffrey M. Girard}, year = {2014}, pages = {394--398}, doi = {10.1145/2663204.2667575} }

2013

Jeni, Girard, Cohn & De la Torre
Proceedings of the 10th IEEE International Conference on Automated Face and Gesture Recognition (FG)
Most work in automatic facial expression analysis seeks to detect discrete facial actions. Yet, the meaning and function of facial actions often depends in part on their intensity. We propose a part-based, sparse representation for automated measurement of continuous variation in AU intensity. We evaluated its effectiveness in two publicly available databases, CK+ and the soon-to-be-released Binghamton high-resolution spontaneous 3D dyadic facial expression database. The former consists of posed facial expressions and ordinal level intensity (absent, low, and high). The latter consists of spontaneous facial expression in response to diverse, well-validated emotion inductions, and 6 ordinal levels of AU intensity. In a preliminary test, we started from discrete emotion labels and ordinal-scale intensity annotation in the CK+ dataset. The algorithm achieved state-of-the-art performance. These preliminary results supported the utility of the part-based, sparse representation. Second, we applied the algorithm to the more demanding task of continuous AU intensity estimation in spontaneous facial behavior in the Binghamton database. Manual 6-point ordinal coding and continuous measurement were highly consistent. Visual analysis of the overlay of continuous measurement by the algorithm and manual ordinal coding strongly supported the representational power of the proposed method to smoothly interpolate across the full range of AU intensity.
@inproceedings{jeni2013, title = {Continuous {{AU}} Intensity Estimation Using Localized, Sparse Facial Feature Space}, booktitle = {Proceedings of the 10th {{IEEE International Conference}} on {{Automated Face}} and {{Gesture Recognition}} ({{FG}})}, author = {L{\a'a}szl{\a'o} A Jeni and Jeffrey M. Girard and Jeffrey F. Cohn and Fernando {De la Torre}}, year = {2013}, pages = {1--7}, publisher = {IEEE}, address = {Shanghai, China}, doi = {10.1109/fg.2013.6553808} }
Girard, Cohn, Mahoor, Mavadati & Rosenwald
Proceedings of the 10th IEEE International Conference on Automatic Face and Gesture Recognition (FG)
Investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units, and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the "social risk hypothesis" of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science.
@inproceedings{girard2013, title = {Social Risk and Depression: {{Evidence}} from Manual and Automatic Facial Expression Analysis}, booktitle = {Proceedings of the 10th {{IEEE International Conference}} on {{Automatic Face}} and {{Gesture Recognition}} ({{FG}})}, author = {Jeffrey M. Girard and Jeffrey F. Cohn and Mohammad H. Mahoor and S Mohammad Mavadati and Dean P. Rosenwald}, year = {2013}, pages = {1--8}, publisher = {IEEE}, address = {Shanghai, China}, doi = {10.1109/fg.2013.6553748} }

2011

Girard & Cohn
Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCV Workshops)
Implementing a computerized facial expression analysis system for automatic coding requires that a threshold for the system's classifier outputs be selected. However, there are many potential ways to select a threshold. How do different criteria and metrics compare? Manually FACS-coded video of 45 clinical interviews (Spectrum dataset) was processed using person-specific active appearance models (AAM). Support vector machine (SVM) classifiers were trained using an independent dataset (RU-FACS). Spectrum sessions were randomly assigned to training (n = 32) and testing (n = 13) sets. Six different threshold selection criteria were compared for automatic AU coding. Three major findings emerged: 1) Thresholds that attempt to balance the confusion matrix (using kappa, F1, or MCC) performed significantly better on all metrics than thresholds that select arbitrary error or accuracy rates (such as TPR, FPR, or EER). 2) AU detection scores for kappa, F1, and MCC were highly intercorrelated; accuracy was uncorrelated with the others. 3) Kappa, MCC, and F1 were all positively correlated with base rate; they increased with increases in AU base rates. Accuracy, by contrast, showed the opposite pattern: it was strongly negatively correlated with base rate. These findings suggest that better automatic coding can be obtained by using threshold-selection criteria that balance the confusion matrix and benefit from increased AU base rates in the training data.
@inproceedings{girard2011, title = {Criteria and Metrics for Thresholded {{AU}} Detection}, booktitle = {Proceedings of the {{IEEE International Conference}} on {{Computer Vision Workshops}} ({{ICCV Workshops}})}, author = {Jeffrey M. Girard and Jeffrey F. Cohn}, year = {2011}, pages = {2191--2197}, publisher = {IEEE}, address = {Barcelona, Spain}, doi = {10.1109/iccvw.2011.6130519} }
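As an illustration of the recommended strategy, the sketch below sweeps candidate thresholds over a classifier's scores and keeps the one that maximizes a confusion-matrix-balancing criterion (kappa, F1, or MCC). The labels and scores are synthetic; this is not the study's code:

# Minimal sketch: selecting a classifier threshold by maximizing a
# confusion-matrix-balancing metric rather than fixing an error rate.
import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score, matthews_corrcoef

def select_threshold(scores, labels, criterion):
    """Return the threshold on `scores` that maximizes `criterion`."""
    best_t, best_val = None, -np.inf
    for t in np.unique(scores):
        preds = (scores >= t).astype(int)
        val = criterion(labels, preds)
        if val > best_val:
            best_t, best_val = t, val
    return best_t, best_val

# Example with synthetic binary AU labels and noisy classifier scores.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500)
scores = labels + rng.normal(scale=1.0, size=500)

for name, crit in [("kappa", cohen_kappa_score), ("F1", f1_score),
                   ("MCC", matthews_corrcoef)]:
    t, v = select_threshold(scores, labels, crit)
    print(f"{name}: threshold = {t:.2f}, value = {v:.3f}")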

References

Aafjes-van Doorn, K., & Girard, J. M. (2025). From intuition to innovation: Empirical illustrations of multimodal measurement in psychotherapy research. Psychotherapy Research, 35(2), 171–173. https://doi.org/10/g824tc
Adaryukov, J., Biernat, M., Girard, J. M., Villicana, A. J., & Pleskac, T. J. (2025). Worth the weight: An examination of unstructured and structured data in graduate admissions. Decision, 12(1), 4–30. https://doi.org/10.1037/dec0000251
Agrawal, V., Akinyemi, A., Alvero, K., Behrooz, M., Buffalini, J., Carlucci, F. M., Chen, J., Chen, J., Chen, Z., Cheng, S., Chowdary, P., Chuang, J., D’Avirro, A., Daly, J., Dong, N., Duppenthaler, M., Gao, C., Girard, J., Gleize, M., … Zollhoefer, M. (2025). Seamless interaction: Dyadic audiovisual motion modeling and large-scale dataset. arXiv:2506.22554 [cs.CV].
Baber, G. R., Hamilton, N. A., Girard, J. M., Cohen, J. M., Gratton, M. K. P., Ellis, S., & Hemmer, E. (2024). It’s the sentiment that counts: Comparing sentiment analysis tools for estimating affective valence in dream reports. Sleep, 47(12), zsae210. https://doi.org/10.1093/sleep/zsae210
Bowdring, M. A., Sayette, M. A., Girard, J. M., & Woods, W. C. (2021). In the eye of the beholder: A comprehensive analysis of stimulus type, perceiver, and target in physical attractiveness perceptions. Journal of Nonverbal Behavior, 45(2), 241–259. https://doi.org/10.1007/s10919-020-00350-2
Butler, R. M., Christian, C., Girard, J. M., Vanzhula, I. A., & Levinson, C. A. (2024). Are within- and between-session changes in distress associated with treatment outcomes? Findings from two clinical trials of exposure for eating disorders. Behaviour Research and Therapy, 180, 104577. https://doi.org/10.1016/j.brat.2024.104577
Campbell, C., Girard, J. M., McDuff, D., & Rosengren, S. (2025). EXPRESS: How influencers grow: An empirical study and future research agenda. Journal of Interactive Marketing, 10949968251360683. https://doi.org/10.1177/10949968251360683
Caumiant, E. P., Kang, D., Girard, J. M., & Fairbairn, C. E. (2025). Alcohol and emotion: Analyzing convergence between facially expressed and self-reported indices of emotion under alcohol intoxication. Psychology of Addictive Behaviors. https://doi.org/10.1037/adb0001053
Chung, Y., Girard, J. M., Ravichandran, C., Öngür, D., Cohen, B. M., & Baker, J. T. (2025). Transdiagnostic modeling of clinician-rated symptoms in affective and nonaffective psychotic disorders. Journal of Psychopathology and Clinical Science, 134(1), 81–96. https://doi.org/10/g8pwmn
Cohn, J. F., Ertugrul, I. O., Chu, W.-S., Girard, J. M., & Hammal, Z. (2018). Affective facial computing: Generalizability across domains. In X. Alameda-Pineda, E. Ricci, & N. Sebe (Eds.), Multimodal behavior analysis in the wild: Advances and challenges (pp. 407–441). Academic Press. https://shop.elsevier.com/books/multimodal-behavior-analysis-in-the-wild/alameda-pineda/978-0-12-814601-9
Creswell, K. G., Wright, A. G. C., Sayette, M. A., Girard, J. M., Lyons, G., & Smyth, J. M. (2025). The effects of alcohol in groups of heavy-drinking young adults: A multimodal investigation of alcohol responses in a laboratory social setting. Clinical Psychological Science, 21677026251333784. https://doi.org/10.1177/21677026251333784
Edershile, E. A., Girard, J. M., Woods, W. C., Williams, T. F., Simms, L. J., & Wright, A. G. C. (2025). Narcissism from every angle: An interpersonal analysis of narcissism in young adults. Assessment, 10731911251356150. https://doi.org/10.1177/10731911251356150
Fairbairn, C. E., Sayette, M. A., Amole, M. C., Dimoff, J. D., Cohn, J. F., & Girard, J. M. (2015). Speech volume indexes sex differences in the social-emotional effects of alcohol. Experimental and Clinical Psychopharmacology, 23(4), 255–264. https://doi.org/10.1037/pha0000021
Gerber, A. H., Girard, J. M., Scott, S. B., & Lerner, M. D. (2019). Alexithymia – not autism – is associated with frequency of social interactions in adults. Behaviour Research and Therapy, 123, 103477. https://doi.org/10.1016/j.brat.2019.103477
Girard, J. M. (2014a). CARMA: Software for continuous affect rating and media annotation. Journal of Open Research Software, 2(1), e5–e5. https://doi.org/10.5334/jors.ar
Girard, J. M. (2014b). Perceptions of interpersonal behavior are influenced by gender, facial expression intensity, and head pose. Proceedings of the 2014 International Conference on Multimodal Interaction, 394–398. https://doi.org/10.1145/2663204.2667575
Girard, J. M. (2017). Open-source software for continuous measurement and media annotation. Proceedings of the 12th IEEE International Conference on Automated Face and Gesture Recognition (FG), 31, 995–995. https://doi.org/10.1109/fg.2017.151
Girard, J. M., Chu, W.-S., Jeni, L. A., Cohn, J. F., De La Torre, F., & Sayette, M. A. (2017). Sayette group formation task (GFT) spontaneous facial expression database. Proceedings of the 12th IEEE International Conference on Automatic Face and Gesture Recognition (FG), 581–588. https://doi.org/10.1109/fg.2017.144
Girard, J. M., & Cohn, J. F. (2011). Criteria and metrics for thresholded AU detection. Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCV Workshops), 2191–2197. https://doi.org/10.1109/iccvw.2011.6130519
Girard, J. M., & Cohn, J. F. (2015). Automated audiovisual depression analysis. Current Opinion in Psychology, 4, 75–79. https://doi.org/10.1016/j.copsyc.2014.12.010
Girard, J. M., & Cohn, J. F. (2016). A primer on observational measurement. Assessment, 23(4), 404–413. https://doi.org/10.1177/1073191116635807
Girard, J. M., Cohn, J. F., & De la Torre, F. (2015). Estimating smile intensity: A better way. Pattern Recognition Letters, 66, 13–21. https://doi.org/10/f7tkjg
Girard, J. M., Cohn, J. F., Jeni, L. A., Lucey, S., & De la Torre, F. (2015). How much training data for facial action unit detection? Proceedings of the 11th IEEE International Conference on Automatic Face and Gesture Recognition (FG), 1–8. https://doi.org/10.1109/fg.2015.7163106
Girard, J. M., Cohn, J. F., Jeni, L. A., Sayette, M. A., & De la Torre, F. (2015). Spontaneous facial expression in unscripted social interactions can be measured automatically. Behavior Research Methods, 47(4), 1136–1147. https://doi.org/10.3758/s13428-014-0536-1
Girard, J. M., Cohn, J. F., Mahoor, M. H., Mavadati, S. M., Hammal, Z., & Rosenwald, D. P. (2014). Nonverbal social withdrawal in depression: Evidence from manual and automatic analyses. Image and Vision Computing, 32(10), 641–647. https://doi.org/10.1016/j.imavis.2013.12.007
Girard, J. M., Cohn, J. F., Mahoor, M. H., Mavadati, S. M., & Rosenwald, D. P. (2013). Social risk and depression: Evidence from manual and automatic facial expression analysis. Proceedings of the 10th IEEE International Conference on Automatic Face and Gesture Recognition (FG), 1–8. https://doi.org/10.1109/fg.2013.6553748
Girard, J. M., Cohn, J. F., Yin, L., & Morency, L.-P. (2021). Reconsidering the Duchenne smile: Formalizing and testing hypotheses about eye constriction and positive emotion. Affective Science, 2(1), 32–47. https://doi.org/10.1007/s42761-020-00030-w
Girard, J. M., & McDuff, D. (2017). Historical heterogeneity predicts smiling: Evidence from large-scale observational analyses. Proceedings of the 12th IEEE International Conference on Automatic Face and Gesture Recognition (FG), 719–726. https://doi.org/10.1109/fg.2017.135
Girard, J. M., Shandar, G., Liu, Z., Cohn, J. F., Yin, L., & Morency, L.-P. (2019). Reconsidering the Duchenne smile: Indicator of positive emotion or artifact of smile intensity? Proceedings of the 8th International Conference on Affective Computing and Intelligent Interaction (ACII), 594–599. https://doi.org/10.1109/acii.2019.8925535
Girard, J. M., Tie, Y., & Liebenthal, E. (2023). DynAMoS: The dynamic affective movie clip database for subjectivity analysis. Proceedings of the 11th International Conference on Affective Computing and Intelligent Interaction (ACII), 1–8. https://doi.org/10.1109/ACII59096.2023.10388135
Girard, J. M., Vail, A. K., Liebenthal, E., Brown, K., Kilciksiz, C. M., Pennant, L., Liebson, E., Öngür, D., Morency, L.-P., & Baker, J. T. (2022). Computational analysis of spoken language in acute psychosis and mania. Schizophrenia Research, 245, 97–115. https://doi.org/10.1016/j.schres.2021.06.040
Girard, J. M., & Wright, A. G. C. (2018). DARMA: Software for dual axis rating and media annotation. Behavior Research Methods, 50(3), 902–909. https://doi.org/10.3758/s13428-017-0915-5
Girard, J. M., Wright, A. G. C., Beeney, J. E., Lazarus, S. A., Scott, L. N., Stepp, S. D., & Pilkonis, P. A. (2017). Interpersonal problems across levels of the psychopathology hierarchy. Comprehensive Psychiatry, 79, 53–69. https://doi.org/10.1016/j.comppsych.2017.06.014
Girard, J. M., Yermol, D. A., Bylsma, L. M., Cohn, J. F., Fournier, J. C., Morency, L.-P., & Swartz, H. A. (2025). Dynamic and dyadic relationships between facial behavior, working alliance, and treatment outcomes during depression therapy. Journal of Consulting and Clinical Psychology.
Girard, J. M., Yermol, D. A., Salah, A. A., & Cohn, J. F. (2025). Computational analysis of expressive behavior in clinical assessment. Annual Review of Clinical Psychology. https://doi.org/10.1146/annurev-clinpsy-081423-024140
Grove, J. L., Smith, T. W., Girard, J. M., & Wright, A. G. (2019). Narcissistic admiration and rivalry: An interpersonal approach to construct validation. Journal of Personality Disorders, 33(6), 751–775. https://doi.org/10.1521/pedi_2019_33_374
Hopwood, C. J., Harrison, A. L., Amole, M. C., Girard, J. M., Wright, A. G. C., Thomas, K. M., Sadler, P., Ansell, E. B., Chaplin, T. M., Morey, L. C., Crowley, M. J., Durbin, C. E., & Kashy, D. A. (2020). Properties of the continuous assessment of interpersonal dynamics across sex, level of familiarity, and interpersonal conflict. Assessment, 27(1), 40–56. https://doi.org/10.1177/1073191118798916
Jeni, L. A., Girard, J. M., Cohn, J. F., & De la Torre, F. (2013). Continuous AU intensity estimation using localized, sparse facial feature space. Proceedings of the 10th IEEE International Conference on Automated Face and Gesture Recognition (FG), 1–7. https://doi.org/10.1109/fg.2013.6553808
Jeni, L. A., Girard, J. M., Cohn, J. F., & Kanade, T. (2015). Real-time dense 3D face alignment from 2D video with automatic facial action unit coding. IEEE International Conference and Workshops on Automatic Face and Gesture Recognition. https://doi.org/10.1109/fg.2015.7163165
Jun, D., Girard, J. M., Martin, C. K., & Fazzino, T. L. (2025). The role of hyper-palatable foods in energy intake measured using mobile food photography methodology. Eating Behaviors, 57, 101983. https://doi.org/10.1016/j.eatbeh.2025.101983
Kebe, G. Y., Birlikci, M. D., Boudin, A., Ishii, R., Girard, J. M., & Morency, L.-P. (2024). GeSTICS: A multimodal corpus for studying gesture synthesis in two-party interactions with contextualized speech. Proceedings of the 24th ACM International Conference on Intelligent Virtual Agents, 1–10. https://doi.org/10.1145/3652988.3673917
Kim, H., Küster, D., Girard, J. M., & Krumhuber, E. G. (2023). Human and machine recognition of dynamic and static facial expressions: Prototypicality, ambiguity, and complexity. Frontiers in Psychology, 14. https://doi.org/10.3389/fpsyg.2023.1221081
L’Insalata, A. M., Girard, J. M., & Fazzino, T. L. (2024). Sources of environmental reinforcement and engagement in health risk behaviors among a general population sample of US adults. International Journal of Environmental Research and Public Health, 21(11), 1390. https://doi.org/10.3390/ijerph21111390
Lin, V., Girard, J. M., & Morency, L.-P. (2020). Context-dependent models for predicting and characterizing facial expressiveness. Proceedings of the 3rd Workshop on Affective Content Analysis Co-Located with the 34th AAAI Conference on Artificial Intelligence, 2614, 11–28.
Lin, V., Girard, J. M., Sayette, M. A., & Morency, L.-P. (2020). Toward multimodal modeling of emotional expressiveness. Proceedings of the 22nd International Conference on Multimodal Interaction, 548–557. https://doi.org/10.1145/3382507.3418887
McDuff, D., & Girard, J. M. (2019). Democratizing psychological insights from analysis of nonverbal behavior. Proceedings of the 8th International Conference on Affective Computing and Intelligent Interaction (ACII), 220–226. https://doi.org/10.1109/acii.2019.8925503
McDuff, D., Girard, J. M., & El Kaliouby, R. (2017). Large-scale observational evidence of cross-cultural differences in facial behavior. Journal of Nonverbal Behavior, 41(1), 1–19. https://doi.org/10/f92t52
Muszynski, M., Zelazny, J., Girard, J. M., & Morency, L.-P. (2020). Depression severity assessment for adolescents at high risk of mental disorders. Proceedings of the 22nd International Conference on Multimodal Interaction, 70–78. https://doi.org/10.1145/3382507.3418859
Pacella, M. L., Girard, J. M., Wright, A. G. C., Suffoletto, B., & Callaway, C. W. (2018). The association between daily posttraumatic stress symptoms and pain over the first 14 days after injury: An experience sampling study. Academic Emergency Medicine, 25(8), 844–855. https://doi.org/10.1111/acem.13406
Rincon Caicedo, M., Girard, J. M., Punt, S. E., Giovanetti, A. K., & Ilardi, S. S. (2025). Depressive symptoms among Hispanic Americans: Investigating the interplay of acculturation and demographics. Journal of Latinx Psychology, 13(1), 68–84. https://doi.org/10.1037/lat0000266
Ross, J. M., Girard, J. M., Wright, A. G. C., Beeney, J. E., Scott, L. N., Hallquist, M. N., Lazarus, S. A., Stepp, S. D., & Pilkonis, P. A. (2017). Momentary patterns of covariation between specific affects and interpersonal behavior: Linking relationship science and personality assessment. Psychological Assessment, 29(2), 123–134. https://doi.org/10.1037/pas0000338
Sewall, C. J. R., Girard, J. M., Merranko, J., Hafeman, D., Goldstein, B. I., Strober, M., Hower, H., Weinstock, L. M., Yen, S., Ryan, N. D., Keller, M. B., Liao, F., Diler, R. S., Gill, M. K., Axelson, D., Birmaher, B., & Goldstein, T. R. (2021). A Bayesian multilevel analysis of the longitudinal associations between relationship quality and suicidal ideation and attempts among youth with bipolar disorder. Journal of Child Psychology and Psychiatry, 62(7), 905–915. https://doi.org/10.1111/jcpp.13343
Shepherd, L. M., Sly, K. F., & Girard, J. M. (2017). Comparison of comprehensive and abstinence-only sexuality education in young African American adolescents. Journal of Adolescence, 61, 50–63. https://doi.org/10.1016/j.adolescence.2017.09.006
Sprunger, J. G., Girard, J. M., & Chard, K. M. (2024). Associations between transdiagnostic traits of psychopathology and hybrid posttraumatic stress disorder factors in a trauma-exposed community sample. Journal of Traumatic Stress, 37(3), 384–396. https://doi.org/10.1002/jts.23023
Swartz, H. A., Bylsma, L. M., Fournier, J. C., Girard, J. M., Spotts, C., Cohn, J. F., & Morency, L.-P. (2023). Randomized trial of brief interpersonal psychotherapy and cognitive behavioral therapy for depression delivered both in-person and by telehealth. Journal of Affective Disorders, 333, 543–552. https://doi.org/10.1016/j.jad.2023.04.092
Vail, A. K., Girard, J. M., Bylsma, L. M., Cohn, J. F., Fournier, J. C., Swartz, H. A., & Morency, L.-P. (2021). Goals, tasks, and bonds: Toward the computational assessment of therapist versus client perception of working alliance. Proceedings of the 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG), 1–8. https://doi.org/10/gpfjrn
Vail, A. K., Girard, J. M., Bylsma, L. M., Cohn, J. F., Fournier, J. C., Swartz, H. A., & Morency, L.-P. (2022). Toward causal understanding of therapist-client relationships: A study of language modality and social entrainment. Proceedings of the 24th ACM International Conference on Multimodal Interaction, 487–494. https://doi.org/10/gt3bwj
Vail, A. K., Girard, J. M., Bylsma, L. M., Fournier, J. C., Swartz, H. A., Cohn, J. F., & Morency, L.-P. (2023). Representation learning for interpersonal and multimodal behavior dynamics: A multiview extension of latent change score models. Proceedings of the 25th ACM International Conference on Multimodal Interaction, 517–526. https://doi.org/10.1145/3577190.3614118
Valstar, M. F., Almaev, T., Girard, J. M., McKeown, G., Mehu, M., Yin, L., Pantic, M., & Cohn, J. F. (2015). FERA 2015 – Second facial expression recognition and analysis challenge. Proceedings of the 11th IEEE International Conference on Automatic Face and Gesture Recognition (FG), 1–8. https://doi.org/10.1109/fg.2015.7284874
Valstar, M. F., Sanchez-Lozano, E., Cohn, J. F., Jeni, L. A., Girard, J. M., Zhang, Z., Yin, L., & Pantic, M. (2017). FERA 2017 – Addressing head pose in the third facial expression recognition and analysis challenge. Proceedings of the 12th IEEE International Conference on Automatic Face and Gesture Recognition (FG), 839–847. https://doi.org/10.1109/fg.2017.107
van Oest, R., & Girard, J. M. (2022). Weighting schemes and incomplete data: A generalized Bayesian framework for chance-corrected interrater agreement. Psychological Methods, 27(6), 1069–1088. https://doi.org/10.1037/met0000412
Wolfert, P., Girard, J. M., Kucherenko, T., & Belpaeme, T. (2021). To rate or not to rate: Investigating evaluation methods for generated co-speech gestures. Proceedings of the 23rd International Conference on Multimodal Interaction, 494–502. https://doi.org/10.1145/3462244.3479889
Zhang, X., Yin, L., Cohn, J. F., Canavan, S., Reale, M., Horowitz, A., Liu, P., & Girard, J. M. (2014). BP4D-Spontaneous: A high-resolution spontaneous 3D dynamic facial expression database. Image and Vision Computing, 32(10), 692–706. https://doi.org/10.1016/j.imavis.2014.06.002
Zhang, Z., Girard, J. M., Wu, Y., Zhang, X., Liu, P., Ciftci, U., Canavan, S., Reale, M., Horowitz, A., Yang, H., Cohn, J. F., Ji, Q., & Yin, L. (2016). Multimodal spontaneous emotion corpus for human behavior analysis. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3438–3446. https://doi.org/10.1109/cvpr.2016.374