We are calling for participation in FERA 2017, the 3rd automatic facial expression analysis challenge, held as a workshop of FG 2017 in Washington, D.C., on either 30 May or 3 June 2017 (T.B.C.). The challenge will focus on pose-independent AU occurrence detection and intensity estimation. For full information, see http://sspnet.eu/fera2017/
The FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) will help raise the bar for expression recognition by challenging participants to estimate AU occurrence and intensity in facial images taken from 9 different view angles, and it will continue to bridge the gap between excellent research on facial expression recognition and the low comparability of results. We do this by means of two selected tasks: the detection of FACS Action Unit (AU) occurrence, and the fully automatic estimation of AU intensity (i.e. for every frame, without prior knowledge of AU occurrence).
The data used will be provided by the organisers and is derived from a high-quality, non-posed database: BP4D. It consists of recordings of people displaying a range of expressions.
The corpus is provided as three strictly divided partitions: a training, a development, and a test partition. The test partition is constructed such that participating systems will be evaluated on both subject-dependent and subject-independent data, as both situations occur frequently in real-life settings. The test partition will be held back by the organisers: participants will send in their working programs and the organisers will run these on the test data. The training and development partitions will be available to participants immediately. The purpose of the development partition is to allow participants to run a large number of studies on a commonly defined part of the fully labelled available data; it is perfectly fine to use the development partition in training the final version of a participant's entry.
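To illustrate the subject-dependent versus subject-independent distinction mentioned above, a small sketch (the subject IDs, clip IDs, and held-out set are all invented for illustration): a subject-independent split places every recording of a held-out subject in the test set, so no subject appears in both partitions.

```python
# Hypothetical recording list: (subject_id, clip_id) pairs.
recordings = [("S01", 1), ("S01", 2), ("S02", 1), ("S03", 1), ("S03", 2)]

# Subjects reserved for testing (chosen here purely for illustration).
held_out_subjects = {"S03"}

# Subject-independent split: all clips of a held-out subject go to test.
train = [r for r in recordings if r[0] not in held_out_subjects]
test = [r for r in recordings if r[0] in held_out_subjects]

# No subject appears in both partitions.
assert not {s for s, _ in train} & {s for s, _ in test}
```

A subject-dependent evaluation, by contrast, allows clips of the same subject to appear in both training and test data.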
Two FACS AU based sub-challenges are addressed:
Occurrence Sub-Challenge: the occurrence of Action Units has to be detected on a per-frame basis in 9 different facial views.
Fully Automatic Intensity Sub-Challenge: both the occurrence and intensity of AUs must be predicted for every frame of a video in 9 different facial views.
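Frame-based AU occurrence detection of the kind described above is commonly scored per AU with measures such as the F1 score. A minimal sketch of frame-level F1 for a single AU (the per-frame labels below are invented for illustration, not taken from the challenge data):

```python
def f1_score(truth, pred):
    """Frame-level F1 for one AU: truth and pred are 0/1 labels per frame."""
    tp = sum(t == 1 and p == 1 for t, p in zip(truth, pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(truth, pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(truth, pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Invented per-frame occurrence labels for one AU across 8 frames.
truth = [0, 1, 1, 0, 1, 0, 0, 1]
pred  = [0, 1, 0, 0, 1, 1, 0, 1]
print(f1_score(truth, pred))  # → 0.75
```

Intensity estimation is an ordinal prediction per frame (FACS intensities range from 0 to 5), so it is typically scored with agreement measures rather than F1.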