FERA 2017


News: Baseline paper released. 
News: Paper submission is now open.

News: Call for Papers for Special Issue in the International Journal on Image and Vision Computing now out, topic: Reliable Facial Coding. Deadline: 15 May 2017.
News: Submission deadline extended to 22 February 2017.

Call for participation:

Organisers

Michel Valstar        

University of Nottingham, UK

michel.valstar@nottingham.ac.uk

 

Jeff Cohn        

University of Pittsburgh / Carnegie Mellon University, USA

jeffcohn@cs.cmu.edu

 

Lijun Yin        

Binghamton University, NY, USA

lijun@cs.binghamton.edu

 

Maja Pantic        

Imperial College London, UK

m.pantic@imperial.ac.uk

Scope

Most Facial Expression Recognition and Analysis systems proposed in the literature focus on the binary occurrence of expressions in frontal-view images, typically either basic emotions or FACS Action Units (AUs). In reality, faces are often captured at an angle to the camera, and expressions can vary greatly in intensity; intensity is often a strong cue for interpreting the meaning of an expression. In addition, despite efforts towards evaluation standards (e.g. FERA 2011, FERA 2015), there is still a need for benchmark datasets with a higher volume of data. Face analysis is a rapidly growing field of research, driven by the constantly increasing interest in applications for human behaviour analysis, human-machine communication, and multimedia retrieval, so further effort in this area is highly valuable.

In these respects, the FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) aims to raise the bar for expression recognition by challenging participants to estimate AU occurrence and intensity in face images captured from 9 different angles, and it will continue to bridge the gap between the quality of research on facial expression recognition and the low comparability of its results. We do this by means of two selected tasks: the detection of FACS Action Unit (AU) occurrence, and the fully automatic estimation of AU intensity (i.e. for every frame, without prior knowledge of AU occurrence).

The data used will be provided by the organisers and is derived from a high-quality, non-posed database: BP4D. It consists of recordings of people displaying a range of expressions.

The corpus is provided as three strictly divided partitions: one training, one development, and one test partition. The test partition is constructed such that participating systems will be evaluated on both subject-dependent and subject-independent data, as both situations occur frequently in real-life settings. The test partition will be held back by the organisers: participants will send in their working programs and the organisers will run these on the test data. The training and development partitions will be available to the participants immediately. The reason for having a development partition is to allow participants to run a large number of studies on a commonly defined part of the fully labelled available data – it is perfectly fine to use the development partition in training the final version of a participant's entry.

Please see the evaluation guidelines, to which all participants in the challenge should adhere. Baseline features will be released, and benchmark results of two basic approaches will be provided to act as a baseline.

Two FACS AU based sub-challenges are addressed:

  • Occurrence Sub-Challenge: Action Units have to be detected on a per-frame basis in 9 different facial views.
  • Fully Automatic Intensity Sub-Challenge: both the occurrence and intensity of AUs must be predicted for each frame of a video, in 9 different facial views (see the evaluation sketch below).
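For illustration, the sketch below shows how per-frame predictions for a single AU might be scored. The authoritative measures are those in the evaluation guidelines; the F1 score for occurrence and ICC(3,1) for intensity used here are the measures from previous FERA editions and are an assumption on our part, not a specification.

    import numpy as np

    def occurrence_f1(y_true, y_pred):
        # Per-frame F1 score for binary AU occurrence (1 = AU present).
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        return 2.0 * tp / (2.0 * tp + fp + fn) if tp > 0 else 0.0

    def intensity_icc(y_true, y_pred):
        # ICC(3,1) agreement between per-frame intensity labels (0-5) and predictions.
        ratings = np.stack([y_true, y_pred], axis=1).astype(float)  # frames x 2 "raters"
        n, k = ratings.shape
        grand_mean = ratings.mean()
        ss_rows = k * np.sum((ratings.mean(axis=1) - grand_mean) ** 2)
        ss_cols = n * np.sum((ratings.mean(axis=0) - grand_mean) ** 2)
        ss_err = np.sum((ratings - grand_mean) ** 2) - ss_rows - ss_cols
        ms_rows = ss_rows / (n - 1)
        ms_err = ss_err / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

    # Toy example: ground truth vs. predictions for one AU over six frames.
    occ_true = np.array([0, 0, 1, 1, 1, 0]); occ_pred = np.array([0, 1, 1, 1, 0, 0])
    int_true = np.array([0, 0, 2, 3, 2, 0]); int_pred = np.array([0, 1, 2, 2, 1, 0])
    print(occurrence_f1(occ_true, occ_pred))   # 0.667
    print(intensity_icc(int_true, int_pred))

In practice these scores would be computed per AU over all test frames and then averaged across AUs, but the exact aggregation is defined in the evaluation guidelines.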

In addition to participation in the challenge, we invite papers that address the benchmarking of facial expressions, open challenges in facial expression recognition in general, and FACS AU analysis in particular.

The organisers will write a paper outlining the challenge's goals, guidelines, and baseline methods for comparison. This baseline paper, including the results of the baseline method on the test partitions, will be available from the challenge's website from early January 2017. To save space, participants are kindly requested to cite this paper instead of reproducing its results.

Both sub-challenges allow participants to extract their own features and apply their own classification algorithm to these. The labels of the test set will remain unknown throughout the competition, and participants will need to adhere to the specified training and parameter optimisation procedure. Participants will be allowed only three trials to upload their programs for evaluation on the test set. After each upload, the organisers will inform the participants by email of the performance attained by their system.
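Purely by way of illustration, the following sketch shows one possible (entirely hypothetical) pipeline: frame-level features feeding one independent binary classifier per AU. It assumes scikit-learn and uses random placeholder data; it is not the baseline method, and participants are free to choose any features and classifiers they wish.

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    # Placeholder frame-level features and per-frame AU occurrence labels
    # (in practice these would come from the training partition).
    X_train = rng.normal(size=(1000, 64))          # n_frames x n_features
    y_train = rng.integers(0, 2, size=(1000, 10))  # n_frames x n_AUs, 0/1
    X_dev = rng.normal(size=(200, 64))

    # Train one independent binary classifier per Action Unit.
    classifiers = [LinearSVC(C=1.0).fit(X_train, y_train[:, au])
                   for au in range(y_train.shape[1])]

    # Per-frame occurrence predictions for every AU on unseen frames.
    y_pred = np.column_stack([clf.predict(X_dev) for clf in classifiers])
    print(y_pred.shape)  # (200, 10)

Any comparable setup – per-AU regressors for the intensity sub-challenge, multi-label or view-specific models, deep networks – is equally acceptable, provided the training and parameter optimisation procedure above is followed.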

Programme committee:

Tadas Baltrusaitis, Carnegie Mellon University (USA)

Zakia Hammal, Carnegie Mellon University (USA)

Qiang Ji, Rensselaer Polytechnic Institute (USA)

Mohammad Mahoor, University of Denver (USA)

Daniel McDuff, Affectiva (USA)

Marc Méhu, Webster Vienna Private University (AT)

Louis-Philippe Morency, Carnegie Mellon University (USA)

Marcello Mortillaro, University of Geneva (CH)

Ioannis Patras, Queen Mary University of London (UK)

Georgios Tzimiropoulos, University of Nottingham (UK)

Jacob Whitehill, Worcester Polytechnic Institute (USA)

Stefanos Zafeiriou, Imperial College London (UK)

Paper submission:

Please submit your paper through CMT: https://cmt3.research.microsoft.com/FERA2017

Each entry to the challenge must be accompanied by a paper presenting the results of a specific approach to facial expression analysis; these papers will be subject to a peer-review process. The organisers reserve the right to re-evaluate the findings, but will not participate in the challenge themselves. Participants are encouraged to compete in both sub-challenges. Please note that the first system submission must be made two weeks before the submission deadline, to allow us to set up your code! Please read the challenge guidelines carefully for more technical information about the challenge.

Papers should be 6 pages long and should adhere to the same formatting guidelines as the main FG 2017 conference. The review process will be double blind.

Important dates:

Please see the dedicated important dates page.

The results of the Challenge will be presented at the Facial Expression Recognition and Analysis Challenge 2017 Workshop, to be held in conjunction with the IEEE International Conference on Automatic Face and Gesture Recognition (FG 2017) in Washington, DC, USA. Prizes will be awarded to the winners of the two sub-challenges.

If you are interested and planning to participate in the FERA2017 Challenge, or if you want to be kept informed about the Challenge, please send the organisers an e-mail to indicate your interest and visit the homepage:

http://sspnet.eu/fera2017

Thank you and best wishes,

Michel Valstar, Jeff Cohn, Lijun Yin, and Maja Pantic

Contact:

Dr. Michel Valstar

Michel.Valstar@nottingham.ac.uk

University of Nottingham

School of Computer Science

Jubilee Campus, Nottingham

United Kingdom