Facial Expression Recognition and Analysis Challenge 2015

Call for papers:


Organisers:

Michel Valstar,
University of Nottingham, UK
Gary McKeown,
Queen’s University Belfast, UK
Marc Méhu,
Webster University Vienna, AT
Lijun Yin,
Binghamton University, USA
Maja Pantic,
Imperial College London, UK
Jeff Cohn,
University of Pittsburgh / Carnegie Mellon University, USA



Most facial expression recognition and analysis systems proposed in the literature focus on the binary occurrence of expressions, typically basic emotions or FACS Action Units (AUs). In reality, expressions vary greatly in intensity, and this intensity is often a strong cue for interpreting their meaning. In addition, despite efforts towards evaluation standards (e.g. FERA 2011), there is still a need for more standardised evaluation procedures; as a result, published systems suffer from low comparability. This is in stark contrast with more established problems in human behaviour analysis from video, such as face detection and face recognition. At the same time, this is a rapidly growing field of research, driven by the constantly increasing interest in applications for human behaviour analysis, and in technologies for human-machine communication and multimedia retrieval.

In these respects, the FG 2015 Facial Expression Recognition and Analysis challenge (FERA2015) aims to raise the bar for expression recognition by challenging participants to estimate AU intensity, and to continue bridging the gap between excellent research on facial expression recognition and the low comparability of its results. We do this by means of three selected tasks: the detection of FACS Action Unit (AU) occurrence, the estimation of AU intensity for pre-segmented data (i.e. when AU occurrence is known), and fully automatic AU intensity estimation (i.e. for every frame, when AU occurrence is not known).

The data used will be provided by the organisers and is derived from two high-quality, non-posed databases: BP4D and SEMAINE. It consists of recordings of people displaying a range of expressions.

The corpus is provided as three strictly divided partitions: one training, one development, and one test partition. The test partition is constructed such that participating systems will be evaluated on both subject-dependent and subject-independent data, as both situations occur frequently in real-life settings. The test partition will be held back by the organisers: participants will send in their working programs, and the organisers will run these on the test data. The training and development partitions will be available to the participants immediately. The reason for having a development partition is to allow participants to run a large number of studies on a commonly defined part of the fully labelled available data; it is perfectly fine to use the development partition in training the final version of a participant’s entry.
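To make the subject-independence requirement concrete, the sketch below shows one way such a split could be constructed with scikit-learn. This is purely illustrative: the official FERA2015 partitions are fixed by the organisers, and every name and parameter in the sketch is our own assumption.

    # Illustrative sketch only: the official FERA2015 partitions are fixed and
    # distributed by the organisers. This merely demonstrates the idea of a
    # subject-independent split, where no subject appears in both partitions.
    import numpy as np
    from sklearn.model_selection import GroupShuffleSplit

    rng = np.random.default_rng(0)
    n_frames = 1000
    features = rng.normal(size=(n_frames, 64))        # placeholder per-frame features
    subject_ids = rng.integers(0, 20, size=n_frames)  # hypothetical subject ID per frame

    # GroupShuffleSplit keeps all frames of a given subject in one partition.
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
    train_idx, test_idx = next(splitter.split(features, groups=subject_ids))
    assert set(subject_ids[train_idx]).isdisjoint(subject_ids[test_idx])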

Please see the evaluation guidelines, to which all participants in the challenge should adhere. Baseline features will be released, and benchmark results of two basic approaches will be provided to act as baselines. The first uses Local Gabor Binary Patterns from Three Orthogonal Planes (LGBP-TOP) features, PCA, and Support Vector Machines; the second uses tracked fiducial facial points, PCA, and Support Vector Machines.
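For orientation, here is a minimal sketch of how the second baseline's structure (tracked points, PCA, and an SVM) could be assembled with scikit-learn. The feature dimensions, parameters, and per-AU binary formulation below are our own assumptions, not the organisers' implementation; the actual configuration is described in the baseline paper.

    # Hypothetical points + PCA + SVM pipeline for per-frame AU occurrence
    # detection. All data and parameters are placeholders, not the actual
    # baseline configuration.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 98))    # e.g. 49 tracked points as (x, y) pairs
    y_train = rng.integers(0, 2, size=500)  # per-frame occurrence of one AU (0/1)

    model = make_pipeline(
        StandardScaler(),        # normalise point coordinates
        PCA(n_components=0.95),  # keep components explaining 95% of variance
        SVC(kernel="linear"),    # one binary SVM per Action Unit
    )
    model.fit(X_train, y_train)
    predictions = model.predict(rng.normal(size=(10, 98)))  # per-frame 0/1 output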

Three FACS AU-based sub-challenges are addressed (a toy scoring sketch follows the list):

  • Occurrence Sub-Challenge: Action Units have to be detected on a per-frame basis.
  • Pre-Segmented Intensity Sub-Challenge: the segments of AU occurrence can be assumed to be known, and the goal is to predict the intensity level for each frame in which an AU occurs.
  • Fully Automatic Intensity Sub-Challenge: both the occurrence and the intensity of AUs must be predicted for each frame of a video.
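
The official performance measures are specified in the evaluation guidelines; as a toy illustration of what frame-based scoring involves, the sketch below computes an F1 score for occurrence and a Pearson correlation for intensity. Both metric choices here are assumptions made for illustration, not the challenge's prescribed measures.

    # Toy scoring sketch. The challenge's evaluation guidelines define the
    # official metrics; F1 and Pearson correlation are used here only to
    # illustrate frame-based scoring.
    import numpy as np
    from sklearn.metrics import f1_score

    def occurrence_score(y_true, y_pred):
        """Frame-based F1 for binary AU occurrence."""
        return f1_score(y_true, y_pred)

    def intensity_score(y_true, y_pred):
        """Pearson correlation between predicted and coded AU intensity (0-5)."""
        return np.corrcoef(y_true, y_pred)[0, 1]

    print(occurrence_score(np.array([0, 1, 1, 0, 1]), np.array([0, 1, 0, 0, 1])))  # 0.8
    print(intensity_score(np.array([0, 1, 3, 4, 2]), np.array([0, 2, 3, 3, 2])))   # ~0.90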

In addition to participation in the challenge, we invite papers that address the benchmarking of facial expressions, open challenges in facial expression recognition in general, and FACS AU analysis in particular.

The organisers will write a paper outlining the challenge’s goals, guidelines, and baseline methods for comparison. This baseline paper, including the results of the baseline methods on the test partitions, will be available from the challenge’s website from the end of October 2014. To save space, participants are kindly requested to cite this paper instead of reproducing its results.

All three sub-challenges allow participants to extract their own features and apply their own classification algorithms to these. The labels of the test set will remain unknown throughout the competition, and participants will need to adhere to the specified training and parameter optimisation procedure. Participants will be allowed only three trials to upload their programs for evaluation on the test set. After each evaluation run, the organisers will inform the participants by email of the performance attained by their system.

Programme committee:

Jeff Girard,
University of Pittsburgh, USA
Hatice Gunes,
Queen Mary University of London, UK
Qiang Ji,
Rensselaer Polytechnic Institute, USA
Brais Martinez,
University of Nottingham, UK
Louis-Philippe Morency,
Carnegie Mellon University, USA
Ognjen Rudovic,
Imperial College London, UK
Nicu Sebe,
University of Trento, IT
Yorgos Tzimiropoulos,
University of Nottingham, UK

Paper submission:

Please submit your paper through CMT: https://cmt.research.microsoft.com/FERA2015

Each entry to the challenge must be accompanied by a paper presenting the results of a specific approach to facial expression analysis, which will be subjected to a peer-review process. The organisers reserve the right to re-evaluate the findings, but will not themselves participate in the challenge. Participants are encouraged to compete in all three sub-challenges. Please note that the first system submission must be made two weeks before the submission deadline, to allow us to set up your code! Please read the challenge guidelines carefully for more technical information about the challenge.

Papers should be six pages long and should adhere to the same formatting guidelines as the main FG2015 conference. The review process will be double-blind.

Important dates:

Please see the Important Dates page for the challenge.

The results of the challenge will be presented at the Facial Expression Recognition and Analysis Challenge 2015 workshop, to be held in conjunction with Automatic Face and Gesture Recognition 2015 in Ljubljana, Slovenia. Prizes will be awarded to the sub-challenge winners.

If you are interested in and planning to participate in the FERA2015 challenge, or if you want to be kept informed about the challenge, please send the organisers an e-mail to indicate your interest, and visit the challenge homepage.


Thank you and best wishes,

Michel Valstar, Jeff Cohn, Lijun Yin, Gary McKeown, Marc Méhu, and Maja Pantic


Dr. Michel Valstar


University of Nottingham

School of Computer Science

Jubilee Campus, Nottingham

United Kingdom