FG 2011 Facial Expression Recognition and Analysis Challenge (FERA2011)

The first Facial Expression Recognition and Analysis Challenge (FERA2011) was held in conjunction with the IEEE International Conference on Automatic Face and Gesture Recognition 2011 (FG 2011) in Santa Barbara, California, on Friday 25 March 2011. With 16 submissions, 11 accepted papers, and over 90 workshop attendees, the challenge can be considered a great success.

GEMEP-FERA benchmark

The GEMEP-FERA dataset, used in the FERA2011 challenge, is still available from http://gemep-db.sspnet.eu. The benchmark procedure for the test set remains the same as for the original challenge and is explained at http://sspnet.eu/fera2011/news/. Results can be sent by email to michel.valstar@imperial.ac.uk.

Baseline paper

The challenge's motivation, protocol, and data, together with baseline results, are described in a baseline paper published in the FG'11 proceedings. The reference is:

Michel F. Valstar, Bihan Jiang, Marc Méhu, Maja Pantic, and Klaus Scherer, “The First Facial Expression Recognition and Analysis Challenge”, in Proc. IEEE Int’l Conf. Automatic Face and Gesture Recognition, 2011.

Secondary test for emotion sub-challenge

To increase the objectivity and fairness of the challenge, the organisers found it necessary to conduct a second test for the emotion sub-challenge, run by the organisers on a second, smaller test set. Participants had the option either to send their programs to the organisers for evaluation or to bring them to the FG conference for on-site evaluation. Three participants (including the emotion sub-challenge winners) sent us their programs, and seven teams chose on-site evaluation. Only the team of Queensland University of Technology failed to perform the secondary test, for unknown reasons.

Results on the secondary test did not influence the competition ranking, but were made public during the workshop in Santa Barbara, on this website, and in follow-up papers describing the challenge’s results.

Results

Below are the results of the FERA2011 AU detection and emotion recognition sub-challenges. For both, we list results on the person-independent, person-specific, and overall test partitions. For the emotion recognition sub-challenge, we also list the overall results of the secondary test.

F1-measure of AU detection, calculated on a per-frame level
Team            Person independent   Person specific   Overall
ISIR            0.63                 0.58              0.62
UCSD            0.60                 0.54              0.58
KIT             0.54                 0.47              0.52
QUT             0.53                 0.46              0.51
MIT-Cambridge   0.47                 0.42              0.46
Baseline        0.45                 0.42              0.45
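
For reference, the F1 measure is computed at the frame level: for each AU, every frame of every test video counts as an independent binary detection, and F1 is the harmonic mean of the resulting precision and recall. The sketch below (plain Python) illustrates this; the function names are ours, and averaging the per-AU scores into a single figure is an assumption about the aggregation (the baseline paper gives the authoritative definition).

    def f1_per_frame(y_true, y_pred):
        """Frame-level F1 for a single AU; labels are 1 (active) or 0."""
        tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
        fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
        fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
        if tp == 0:
            return 0.0  # no true positives: F1 is zero by convention
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    def mean_f1(per_au_pairs):
        """Assumed aggregation: unweighted mean of per-AU F1 scores.
        per_au_pairs is a list of (y_true, y_pred) pairs, one per AU."""
        return sum(f1_per_frame(t, p) for t, p in per_au_pairs) / len(per_au_pairs)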

For a number of participants, we also have the corresponding 2AFC scores:

2AFC-measure of AU detection, calculated on a per-frame level
Team       Person independent   Person specific   Overall
ISIR       0.763                0.751             0.752
UCSD       0.759                0.753             0.758
TAUD       0.677                -                 -
KIT        0.653                0.628             0.646
Baseline   0.631                0.611             0.628
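
The 2AFC score (two-alternative forced choice) is the probability that a randomly drawn positive frame receives a higher detector output than a randomly drawn negative frame, with ties counted as half; this statistic is equivalent to the area under the ROC curve. A minimal sketch, with the quadratic pairwise loop written out for clarity rather than efficiency:

    def two_afc(pos_scores, neg_scores):
        """P(score of a random positive frame > score of a random negative
        frame), ties counted as 0.5; equivalent to ROC AUC."""
        wins = 0.0
        for sp in pos_scores:
            for sn in neg_scores:
                if sp > sn:
                    wins += 1.0
                elif sp == sn:
                    wins += 0.5
        return wins / (len(pos_scores) * len(neg_scores))

    # A chance-level detector scores about 0.5; a perfect one scores 1.0.
    print(two_afc([0.9, 0.7, 0.4], [0.8, 0.3, 0.2]))  # 7/9, approx. 0.78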

And for the emotion recognition sub-challenge:

Classification rate of discrete emotion recognition, calculated on a per-video level (event-based)
Team                                  Person independent   Person specific   Overall
UC Riverside                          0.75                 0.96              0.84
UIUC-UMC                              0.66                 1.00              0.80
KIT                                   0.66                 0.94              0.77
UCSD-CERT                             0.71                 0.84              0.76
ANU                                   0.65                 0.84              0.73
UC Riverside-2                        0.71                 0.87              0.72
UCL                                   0.61                 0.84              0.70
U. Montreal                           0.58                 0.87              0.70
N.U. Singapore                        0.64                 0.73              0.67
Queensland University of Technology   0.62                 0.55              0.60
Baseline                              0.44                 0.73              0.56
Vrije Universiteit Brussel            0.46                 0.34              0.56
MIT-Cambridge                         0.45                 0.43              0.44
U. Oulu                               0.32                 0.12              0.24
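
The classification rate here is event-based: each test video receives exactly one predicted label out of the five GEMEP-FERA emotion classes (anger, fear, joy, relief, sadness), and the rate is simply the fraction of videos labelled correctly. A minimal sketch with illustrative labels:

    def classification_rate(true_labels, predicted_labels):
        """Event-based classification rate: one emotion label per video."""
        correct = sum(t == p for t, p in zip(true_labels, predicted_labels))
        return correct / len(true_labels)

    truth = ["anger", "fear", "joy", "relief", "sadness"]
    pred  = ["anger", "joy",  "joy", "relief", "sadness"]
    print(classification_rate(truth, pred))  # 0.8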

Finally, the results of the secondary test. The order of the teams is the same as in the primary test results above. Note that the team of Queensland University of Technology decided, for unknown reasons, not to participate in the secondary test, and that a number of participants did not submit a paper and were therefore not asked to perform the secondary test.

Results of the secondary emotion test (same units as the table above)
Team                                  Overall
UC Riverside                          0.86
UIUC-UMC                              0.78
KIT                                   0.76
UCSD-CERT                             0.64
ANU                                   0.70
UC Riverside-2                        0.74
UCL                                   0.70
U. Montreal                           0.96
N.U. Singapore                        0.60
Queensland University of Technology   -
MIT-Cambridge                         0.48

The results are also included in the publicly available slides of the meta-analysis presentation given by Michel Valstar during the workshop.

Organisers:

Michel Valstar (Imperial College London, UK)

Marc Méhu (UNIGE, Switzerland)

Maja Pantic (Imperial College London, UK / University of Twente, The Netherlands)

Klaus Scherer (UNIGE, Switzerland)

Sponsors:

SSPNet Network of Excellence