Challenge Guidelines

Many details about the data, performance measures, baseline features, and prediction results are given in the baseline paper, which is now available for download. Please note that the content of this paper may change until the camera-ready deadline.

To get started

Please contact Dr Fabien Ringeval by email (fabien.ringeval@tum.de) to register your team. This email should include:

  • Team name
  • Team members, including affiliation
  • Team leader
  • Signed EULA

The list of team members must match the one provided on the EULA (see below), and sharing the data with anyone not listed, including lab mates, is not allowed. Upon registering your team, you will be given a username and a password to access the files.

The EULA can be downloaded from the RECOLA website.

After downloading the data, you can directly start your experiments with the train and development sets. Once you have found your best method, you should write your paper for the Workshop. At the same time, you can compute your predictions per instance of the test set and upload them; we will then let you know your performance. See below for more information on the submission process and the way performance is measured.

Challenge data

The organisers provide the following data:

  • Audio data (PCM/wav)
  • Electrocardiogram data (raw/csv)
  • Electrodermal data (raw/csv)
  • Video data (h264/mp4)
  • Dimensional affective labels (arousal, valence) for the train and development partitions.
    • Individual labels per recording from each of the six raters
    • Averaged labels per recording from all 6 raters (gold standard used for performance evaluation).

Results submission

Each participant has up to five submission attempts. You can submit results until the final results deadline, which falls before the camera-ready deadline. Your best results will be used in determining the winner of the challenge. Please send submissions by email to Fabien Ringeval.

Important: the top-two performers of the challenge will be asked to submit their program to us (Fabien Ringeval, Technische Univ. München; Björn Schuller, Imperial College London / Univ. Passau) so that we can verify the results, both on the original test set and on extra hold-out data. The program may be delivered (partly) as an executable or, e.g., as encrypted Matlab code, and we will endeavour to cater for all possible variations in operating systems etc., but we do ask you to be available during the period 14 September – 5 October to work with our team on validating your results.

Participants’ results should be sent as a single zip file to the organisers by email (fabien.ringeval@tum.de). The zip file name should include your team name and the attempt number, e.g. results_TUM_1.zip. The zip file should contain only two directories, “arousal” and “valence”, as sketched below.
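A possible layout of such an archive is sketched here; the individual file names are hypothetical and must in practice match the naming of the gold standard label files you received:

```text
results_TUM_1.zip
├── arousal/
│   ├── test_1.arff        # one ARFF file per test subject (hypothetical names)
│   ├── test_2.arff
│   └── ...
└── valence/
    ├── test_1.arff
    ├── test_2.arff
    └── ...
```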

The data in the results files themselves should be formatted in the same way as the training/development gold standard label files, that is, one ARFF file per subject, containing three attributes: Instance_name, frameTime, and the prediction, in ASCII values. The file names should also be formatted in exactly the same way; a hypothetical sample is shown below.
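As a rough illustration only, a results file might look like the following sketch. The relation name, the name of the prediction attribute, the frame step, and all values here are assumptions; copy the exact header and frame timing from the gold standard files you received:

```text
@relation hypothetical_arousal_results

@attribute Instance_name string
@attribute frameTime numeric
@attribute prediction numeric

@data
'test_1',0.00,0.021
'test_1',0.04,0.025
'test_1',0.08,0.031
```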

For each dimension, the organisers will return the Concordance Correlation Coefficient (CCC), which will be used to rank participants. Pearson’s correlation coefficient and the root-mean-square error (RMSE) will be provided as well, and can be used by the authors to further discuss their results in the paper accompanying their submission. These performance metrics are computed on the concatenation of all development/test instances (i.e., gold standard and prediction).
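For reference, below is a minimal sketch of how these three metrics are commonly computed; the organisers’ exact evaluation script may differ in details such as the use of biased versus unbiased variance:

```python
import numpy as np

def ccc(gold, pred):
    """Concordance Correlation Coefficient (Lin, 1989):
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    gold, pred = np.asarray(gold, float), np.asarray(pred, float)
    mu_g, mu_p = gold.mean(), pred.mean()
    cov = ((gold - mu_g) * (pred - mu_p)).mean()  # population covariance
    return 2 * cov / (gold.var() + pred.var() + (mu_g - mu_p) ** 2)

def pearson(gold, pred):
    """Pearson's correlation coefficient."""
    return np.corrcoef(gold, pred)[0, 1]

def rmse(gold, pred):
    """Root-mean-square error."""
    gold, pred = np.asarray(gold, float), np.asarray(pred, float)
    return np.sqrt(np.mean((gold - pred) ** 2))

# The metrics are computed on the concatenation of all instances,
# not as an average of per-instance scores:
# gold = np.concatenate([gold_1, gold_2, ...])
# pred = np.concatenate([pred_1, pred_2, ...])
```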

Paper submission

All papers must be formatted according to the ACM proceedings style and should be no more than 8 pages long. Reviewing will be double-blind. LaTeX and Word templates for this format can be downloaded from the following links:

LaTeX files (Sample file, Style file, PDF sample file, Bib sample file, and Graph sample file)

Word files

Formatting instructions for other tools

In your submission, please refer to the baseline paper for details about the dataset and the baseline results; this makes for a more readable set of papers, compared to each workshop paper repeating the same information. The baseline paper was made available on the 30th of April – read it here.

Papers must be submitted as PDF files in Letter size through the online paper submission system. Submissions must strictly adhere to page limits. Papers exceeding the page limits will be rejected without review. The maximum allowed file size is 10 MB.

Papers should be submitted through AV+EC 2015's EasyChair submission site.

Camera-ready papers should be submitted through the ACM-MM submission service, which is run by Sheridan Publishing. Authors of accepted papers should receive an email from Sheridan with a direct link for submission (please contact us ASAP if you have not received such an email). Please take care to follow Sheridan’s guidelines for submission, and note that the deadline for AV+EC papers is 31 July.

Registration Information

Please register using the ACM Multimedia main conference website. Participants in AV+EC 2015 simply register using the ACM-MM registration services.

The workshops are included in the full ACM-MM registration, but if you want to attend the workshop only, you can register for a single day.

Frequently asked questions and comments:

I have already downloaded the RECOLA database; shall I send another EULA for AV+EC 2015? Yes, please. The username and password used for RECOLA are not the same as those for accessing the AV+EC 2015 dataset; moreover, you may want to update the names of the lab mates who will use the data.

How many times can I submit my results? You can submit results five times. We will not count badly formatted results towards one of your five submissions.

I’ve submitted a set of results, when will I receive my scores? During an active challenge we strive to return scores within 24 hours on typical working days (Monday – Friday); however, please be patient, as this is subject to workload. Under no circumstances should you spam the organisers, as this simply delays the process for all teams.

Can I have access to the mailing list of participants? Can you tell me the results of other teams? Absolutely not – registrations and results are not for public view. If there is sufficient demand, we may consider offering a separate opt-in mailing list for teams to discuss the challenge.

Can we use other datasets to pre-train the model? The use of external sources is allowed: any other dataset can be used for training your models. Please, however, describe these additional datasets when writing your paper.

Can we use the development and test data during training? Both the training and development partitions can be used for training your models, as well as additional resources – see above. The test partition, for which the labels are unknown, must be used solely for testing your system. It is strictly forbidden to perform any annotation on the test partition! The top-two performers of the challenge will be asked to submit their program to us to verify the results, both on the original test set and on extra hold-out data.

I have submitted my results on the test partition, can I know my rank now? The rank of the participants will be announced at the end of the Workshop day, which means that, in order to know your rank, you need to write a paper describing your system and your results, get it accepted by the Technical Program Committee, and present it during the workshop.

Will we be ranked on the mean of our best attempt, or will you first select our best scores for arousal and valence independently and then compute the mean? The ranking will be based on the mean of the best CCC scores obtained on arousal and valence, independently of the attempt: for example, if your best arousal CCC comes from attempt 2 and your best valence CCC from attempt 4, your ranking score is the mean of those two values. Comments on why a specific architecture might work significantly better than another are thus strongly encouraged, where such observations are valid.