

This dataset is a subset of the GEMEP corpus (see T. Bänziger and K. R. Scherer, 'Introducing the Geneva Multimodal Emotion Portrayal (GEMEP) corpus', in Blueprint for Affective Computing: A Sourcebook, Series in Affective Science, chapter 6.1, pages 271–294). It was used as the dataset for the first Facial Expression Recognition and Analysis challenge (FERA2011; Valstar, Jiang, Mehu, Pantic, and Scherer, 'The First Facial Expression Recognition and Analysis Challenge', IEEE Int'l Conf. Face and Gesture Recognition (FG'11), March 2011). It continues to be available to scientists who wish to benchmark their AU detection and discrete emotion recognition systems. To do so, please download the test data from the database, create predictions according to the guidelines specified by the challenge (http://sspnet.eu/fera2011/news/), and send the results to michel.valstar@imperial.ac.uk to have your scores calculated.

  • year: 2011
  • url: http://gemep-db.sspnet.eu/
  • main_author: Klaus Scherer
  • license: Free, for scientific benchmarking of AU detection and discrete emotion recognition only
  • subjects: 7
  • recordings: 242
  • duration: 5 seconds per clip
  • naturality: roleplay
  • media: Video only
  • language: Fantasy/nonsense
  • interaction: None, single person
  • annotation: frame-based AU activation and clip-based discrete emotion labels, each covering its respective sub-set in its entirety

Categories: face-analysis

4 comments to GEMEP-FERA

  • Dear Michel

    Would it be possible to use this database for our work on discrete emotion recognition systems at LIMSI? Is it also possible to download (from another site?) the audio of the same subset of GEMEP?

    Thank you,
    All the best,
    Laurence Devillers

  • mvalstar

    Dear Laurence,

    Yes, access is possible through http://gemep-db.sspnet.eu . My old Imperial email wasn’t properly forwarding emails to my new Nottingham account, but it’s sorted now.

    The audio is not available in this dataset. However, the original does have audio, and that set can be requested independently from Marc Méhu and Marcello Mortillaro of Klaus Scherer’s group.

    All the best,


  • xiao zhang

    Could you please tell me how to get the emotion and AU labels of the test set?


  • Zicheng Lin

    Hi, my name is Zicheng Lin, a master's student at Jinan University. I am interested in facial expression analysis. Would it be possible to use this database for my study?
