Official Sponsor

Sponsored by ARIA-VALUSPA

AVEC 2016


Get started by following the registration instructions and reading the baseline paper (Updated: 27/05/2016) describing the challenge!
AVEC 2016 is a satellite workshop of ACM Multimedia 2016, Sunday 16 October, ‘Roeterseilandcomplex’, University of Amsterdam, The Netherlands.


Organisers

Michel Valstar, University of Nottingham, UK

Jonathan Gratch, University of Southern California, USA

Björn Schuller, Imperial College London, UK / University of Passau, Germany

Fabien Ringeval, University of Passau, Germany

Roddy Cowie, Queen’s University Belfast, UK

Maja Pantic, Imperial College London, UK


The Audio/Visual Emotion Challenge and Workshop (AVEC 2016) “Depression, Mood and Emotion” will be the sixth competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audio, visual, and physiological depression and emotion analysis, with all participants competing under strictly the same conditions.

The goal of the Challenge is to provide a common benchmark test set for multimodal information processing and to bring together the audio, video, and physiological emotion recognition communities to compare the relative merits of the three approaches to depression and emotion recognition under well-defined and strictly comparable conditions, and to establish to what extent fusion of the approaches is possible and beneficial. A second motivation is the need for depression and emotion recognition systems to handle fully naturalistic behaviour in large volumes of un-segmented, non-prototypical, and non-preselected data, as this is exactly the type of data that both multimedia and human-machine/human-robot communication interfaces face in the real world.

We are calling for teams to participate in a Challenge of fully-continuous depression and emotion detection from audio, video, or physiological data, or any combination of these three modalities. The SimSensei corpus of human-agent interactions will be used as the benchmarking database for the Depression Sub-Challenge, and the RECOLA multimodal corpus of remote collaborative and affective interactions will be used for the Emotion Sub-Challenge. Both depression and emotion will have to be recognised in terms of continuous time and continuous value.

Besides participation in the Challenge we are calling for papers addressing the overall topics of this workshop, in particular works that address the differences between audio, video, and physiological processing of emotive data, and the issues concerning combined audio-visual-physiological emotion recognition.

Please visit our website for more information.

Program Committee

Felix Burkhardt, Deutsche Telekom, Germany

Rama Chellappa, University of Maryland, USA

Mohamed Chetouani, Institut des Systèmes Intelligents et de Robotique (ISIR), France

Julien Epps, University of New South Wales, Australia

Anna Esposito, University of Naples, Italy

Hatice Gunes, University of Cambridge, UK

Qiang Ji, Rensselaer Polytechnic Institute, USA

Jarek Krajewski, Universität Wuppertal, Germany

Marwa Mahmoud, University of Cambridge, UK

Marc Mehu, Webster Vienna Private University, Austria

Stefan Scherer, University of Southern California, USA

Stefan Steidl, Universität Erlangen-Nürnberg, Germany

Matthew Turk, University of California, Santa Barbara, USA

Lijun Yin, Binghamton University, USA

Stefanos Zafeiriou, Imperial College London, UK

Guoying Zhao, University of Oulu, Finland

Important Dates

Challenge opening: 30 March, 2016

Paper submission: 1 July, 2016

Notification of acceptance: 31 July, 2016

Camera ready paper (hard deadline): 16 August, 2016

Workshop: Sunday 16 October, 2016

Topics include, but are not limited to:

Participation in the Challenge

  • Audio/Visual/Physiological Emotion Recognition

  • Audio-based Depression/Emotion Recognition
  • Video-based Depression/Emotion Recognition
  • Physiology-based Emotion Recognition
  • Synchrony of Non-Stationary Time Series
  • Multi-task learning of Multiple Dimensions
  • Weakly Supervised Learning
  • Agglomeration of Learning Data
  • Context in Depression/Emotion Recognition
  • Multiple Rater Ambiguity and Asynchrony


  • Multimedia Coding and Retrieval
  • Usability of Audio/Visual/Physiological Depression/Emotion Recognition
  • Real-time Issues
Submission Policy

In submitting a manuscript to this workshop, the authors acknowledge that no paper of substantially similar content has been submitted to another conference or workshop.

Manuscripts should follow the ACM MM 2016 paper format. Authors should submit papers as a PDF file. Submission will be via EasyChair.

Papers accepted for the workshop will be allocated 8 pages in the proceedings of ACM MM 2016.

AVEC 2016 reviewing is double-blind. Reviewing will be by members of the Program Committee, and each paper will receive at least three reviews. Acceptance will be based on relevance to the workshop, novelty, technical quality, and, when the submission concerns participation in one or both of the sub-challenges, performance.