
FERA 2017


News: Workshop date is 3 June, in the afternoon.

News: Baseline paper released.

News: Call for Papers for a Special Issue of the International Journal on Image and Vision Computing is now out; topic: Reliable Facial Coding. Deadline: 15 May 2017.

Results:

Below you can find the results attained by the FERA 2017 participants for each of the sub-challenges. FERA 2017 remains open for evaluation: please follow the instructions on the ‘challenge guidelines’ page as usual. If, after evaluation, you would like your scores to be listed in the tables below, please send the organisers a message. Please note that we will only list results that have been described in an accepted, peer-reviewed paper, so include a reference to such a paper in your message.

Results of Occurrence sub-challenge (F1)
Team  Overall  Reference
Seualip  0.574  Chuangao et al., ‘View-Independent Facial Action Unit Detection’
CIRBNU  0.507  He et al., ‘Multi View Facial Action Unit Detection based on CNN and BLSTM-RNN’
Magos  0.506  Bellon et al., ‘AUMPNet: simultaneous Action Units detection and intensity estimation on multipose facial images using a single convolutional neural network’
RUCEMoSensor  0.495  Jin et al., ‘Facial Action Units Detection with Multi-Features and -AUs Fusion’
Baseline  0.452  Valstar et al., ‘FERA 2017 – Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge’
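
For reference, the occurrence sub-challenge is scored with the F1 measure computed over binary per-frame AU labels. The sketch below shows one way such a per-AU F1 score can be computed; the function and array names are illustrative, and the exact aggregation over AUs and head poses is the one defined in the baseline paper (Valstar et al., cited above).

    import numpy as np

    def f1_binary(y_true, y_pred):
        """F1 for one AU; y_true and y_pred are per-frame 0/1 arrays."""
        tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
        fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
        fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
        if tp == 0:
            return 0.0  # convention when precision and recall are both zero
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

The ‘Overall’ column aggregates such per-AU scores; see the baseline paper for the exact averaging used.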

 

Results of Intensity sub-challenge (ICC)
Team  Overall  Reference
HKUST  0.445  Pi et al., ‘Pose-independent Facial Action Unit Intensity Regression Based on Multi-task Deep Transfer Learning’
Magos  0.399  Bellon et al., ‘AUMPNet: simultaneous Action Units detection and intensity estimation on multipose facial images using a single convolutional neural network’
ULM  0.295  Kächele et al., ‘Support Vector Regression of Sparse Dictionary-based Features for View-Independent Action Unit Intensity Estimation’
Baseline  0.217  Valstar et al., ‘FERA 2017 – Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge’
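
For reference, the intensity sub-challenge is scored with the intraclass correlation coefficient, ICC(3,1), between predicted and ground-truth per-frame intensities. The sketch below computes ICC(3,1) for the two-rater case (ground truth vs. prediction); the names are illustrative, and per-AU aggregation again follows the baseline paper.

    import numpy as np

    def icc_3_1(gold, pred):
        """ICC(3,1); gold and pred are per-frame intensity arrays (e.g. 0-5)."""
        y = np.stack([gold, pred], axis=1)  # n frames (targets) x k = 2 raters
        n, k = y.shape
        grand_mean = y.mean()
        ss_total = ((y - grand_mean) ** 2).sum()
        ss_targets = k * ((y.mean(axis=1) - grand_mean) ** 2).sum()
        ss_raters = n * ((y.mean(axis=0) - grand_mean) ** 2).sum()
        ss_residual = ss_total - ss_targets - ss_raters
        bms = ss_targets / (n - 1)               # between-target mean square
        ems = ss_residual / ((n - 1) * (k - 1))  # residual mean square
        return (bms - ems) / (bms + (k - 1) * ems)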

Contact:

Dr. Michel Valstar

Michel.Valstar@nottingham.ac.uk

University of Nottingham

School of Computer Science

Jubilee Campus, Nottingham

United Kingdom