Abstract
Public datasets have played a significant role in advancing the state of the art in automated facial coding. Many of these datasets contain posed expressions and/or videos recorded in controlled lab conditions with little variation in lighting or head pose. As such, the data do not reflect the conditions observed in many real-world applications. We present AM-FED+, an extended dataset of naturalistic facial response videos collected in everyday settings. The dataset contains 1,044 videos, of which 545 (263,705 frames, or 21,859 seconds) have been comprehensively manually coded for facial action units. These videos serve as a challenging benchmark for automated facial coding systems. All the videos contain gender labels, and a large subset (77 percent) contain age and country information. Subjects' self-reported liking of and familiarity with the stimuli are also included. We provide automated facial landmark detection locations for the videos. Finally, baseline action unit classification results are presented for the coded videos.
Authors
Daniel McDuff; May Amr; Rana el Kaliouby
Publication available on IEEE Xplore.