Challenge Guidelines

Many details about the data, performance measures, and baseline features and prediction results are included in the baseline paper. Note that this paper is a work in progress: while it may not be complete until the end of the challenge, we aim to keep it factually correct at all times.

Register your team

Please start by registering your team by emailing the following details to Enrique Sánchez Lozano:

  • Team name
  • Team short name
  • All team member names, email addresses, and their affiliations
  • A team leader

Download the data

Create an account to download the training and development partitions from the FERA 2015 data website. This is the same URL and web server used for FERA 2015. However, for FERA 2017 you only need to download the files that contain 'fera2017' in their filename.

Details of the datasets

The FERA 2017 dataset has been directly derived from the BP4D dataset (Zhang et al., 2014). The BP4D dataset includes 41 participants, recorded while performing a set of eight different tasks designed to naturally elicit spontaneous facial expressions. The dataset has been annotated following the Facial Action Coding System (FACS), and 2D and 3D models have been generated for each of the video recordings.

Using the 3D models, a set of nine different views has been generated for each of the video sequences, producing the same videos under different camera views. The scope of the FERA 2017 challenge is therefore to recognise facial expressions (either occurrence or intensity) across views.

The training partition of FERA 2017 consists of the 41 subjects included in the original BP4D, whereas the validation partition has been created from an additional set of 20 subjects (7 women and 13 men). Details of the test partition are to be confirmed.

The Action Units that will be used for the Occurrence sub-challenge are AU1, AU4, AU6, AU7, AU10, AU12, AU14, AU15, AU17, and AU23, whereas the Action Units that will be used for the Intensity sub-challenge are AU1, AU4, AU6, AU10, AU12, AU14, and AU17. While the Occurrence sub-challenge is a binary classification problem, the Intensity sub-challenge is meant to evaluate the ordinal intensity of facial expressions, ranging from 0 to 5.


For the Occurrence sub-challenge, the evaluation measure will be the F1-score, along with the 2AFC score and the accuracy. The final ranking of participants' systems will be based only on the F1-score: that is to say, the predictions and labels of all test sequences will be concatenated into a single sequence to calculate the F1-score per AU.
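
As an illustration of this protocol, the sketch below computes the per-AU F1-score from concatenated sequences. It is not the official scoring code: the data layout (dicts of per-sequence 0/1 NumPy arrays keyed by AU) and the function name are assumptions made for the example, and it relies on scikit-learn's f1_score.

    # Illustrative sketch of the occurrence scoring protocol; not the
    # official evaluation code. Data layout is an assumption.
    import numpy as np
    from sklearn.metrics import f1_score

    def occurrence_f1_per_au(predictions, labels):
        # predictions, labels: dicts mapping each AU to a list of
        # per-sequence 0/1 NumPy arrays (one array per test sequence).
        scores = {}
        for au in labels:
            # Concatenate all test sequences into a single vector per AU.
            y_true = np.concatenate(labels[au])
            y_pred = np.concatenate(predictions[au])
            scores[au] = f1_score(y_true, y_pred)
        # The final sub-challenge score is the average of the per-AU scores.
        return scores, float(np.mean(list(scores.values())))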

For the Intensity sub-challenge, the Intra-Class Correlation coefficient (ICC) will be used. Again, the final score will be computed after concatenating the whole set of labels into a single vector.
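
The baseline paper specifies the exact ICC variant; purely for illustration, the sketch below implements ICC(3,1), the consistency form commonly used in prior FERA editions, treating ground truth and prediction as two "raters" over the concatenated frames.

    # Illustrative ICC(3,1) between ground-truth and predicted intensities;
    # the variant used here is an assumption, see the baseline paper.
    import numpy as np

    def icc_3_1(y_true, y_pred):
        # Stack the two "raters" column-wise: shape (n frames, k=2 raters).
        Y = np.stack([np.asarray(y_true, float), np.asarray(y_pred, float)], axis=1)
        n, k = Y.shape
        grand_mean = Y.mean()
        # Two-way ANOVA sums of squares.
        ss_total = ((Y - grand_mean) ** 2).sum()
        ss_rows = k * ((Y.mean(axis=1) - grand_mean) ** 2).sum()  # between frames
        ss_cols = n * ((Y.mean(axis=0) - grand_mean) ** 2).sum()  # between raters
        ss_err = ss_total - ss_rows - ss_cols                     # residual
        bms = ss_rows / (n - 1)
        ems = ss_err / ((n - 1) * (k - 1))
        return (bms - ems) / (bms + (k - 1) * ems)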

The final score for each sub-challenge will be given as the average F1/ICC score across all AUs.

It is important to remark that the evaluation will be view-independent. That is to say, participants' systems will be run on the whole set of videos, without any prior information about which view was used to generate each specific video. Although participants will be given their performance per view, the goal of the challenge is to work towards systems capable of detecting facial expressions under unconstrained settings, i.e., under a wide range of views.


Participant System Submission

In FERA 2017, all participants must send in their fully working program. The program must be able to process data in the same format as the training and development partitions. The program should receive as input a path to a video, and return the predictions for each of the video frames in a .csv file, using the same format given for the training and development partitions. Other inputs, such as models and/or parameters, are also permitted. In any case, participants should include a short description of how to run the program.
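
As a rough illustration of this interface, the skeleton below reads a video path, runs a placeholder per-frame predictor, and writes one row per frame to a .csv file. The argument names, the dummy predictor, and the column layout are hypothetical; your output must follow the format given with the training and development partitions.

    # Hypothetical submission skeleton; predictor and CSV layout are
    # placeholders, not the official format.
    import argparse
    import csv

    def predict_frames(video_path):
        # Placeholder predictor: replace with your actual per-frame AU model.
        # Here it simply emits zeros for 10 AUs over a fixed number of frames.
        for i in range(100):
            yield i, [0] * 10

    def main():
        parser = argparse.ArgumentParser(description="FERA 2017 submission skeleton (illustrative)")
        parser.add_argument("video_path", help="path to the input video")
        parser.add_argument("output_csv", help="where to write per-frame predictions")
        parser.add_argument("--model", default=None, help="optional model/parameter file")
        args = parser.parse_args()

        with open(args.output_csv, "w", newline="") as f:
            writer = csv.writer(f)
            for frame_index, preds in predict_frames(args.video_path):
                writer.writerow([frame_index] + preds)  # one row per frame

    if __name__ == "__main__":
        main()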

Because of this flexibility, we ask you to work with us to get your programs working on our systems. We accept both compiled and source-code entries. Please submit your first version two weeks before the paper submission deadline, to allow us time to set up your code. After your first submission, the scores for any new system will be sent to you within 48 hours of its submission.

Participants can submit up to five systems, regardless of whether they are small tweaks of the same approach or wildly different approaches. Any low-level comparisons and parameter optimisations should be done using the training and development partitions.


Baseline System

The baseline system utilises geometric features derived from tracked facial points. Both the tracked points and the features will be made publicly available, so that participants can reproduce the experiments reported in the baseline paper. However, participants should note that submitted programs will need to run as standalone applications; should they use the provided features, participants will therefore need to incorporate the code to extract the points and features into their systems.
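
To give a flavour of what such geometric features can look like, the sketch below computes normalised pairwise distances between tracked landmarks for a single frame. This is only one plausible construction, chosen for illustration; the actual baseline features are defined in the baseline paper.

    # Illustrative geometric features from tracked points; not the
    # baseline's feature definition.
    import numpy as np

    def geometric_features(points):
        # points: (N, 2) array of tracked facial landmark coordinates.
        points = np.asarray(points, dtype=float)
        # Make features translation- and scale-invariant: centre and normalise.
        centred = points - points.mean(axis=0)
        scale = np.sqrt((centred ** 2).sum(axis=1)).mean()
        normed = centred / scale
        # Pairwise Euclidean distances between all landmark pairs.
        diffs = normed[:, None, :] - normed[None, :, :]
        dists = np.sqrt((diffs ** 2).sum(axis=-1))
        iu = np.triu_indices(len(points), k=1)
        return dists[iu]  # feature vector of length N*(N-1)/2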

The points have been detected and tracked using the Cascaded Continuous Regression facial point detector/tracker (Sánchez-Lozano et al., 2016), which has been made publicly available and can be downloaded from here. The code to extract the geometric features from the points will be posted here soon.