AVEC 2017


Get started by following the registration instructions.

AVEC 2017 is a satellite workshop of ACM Multimedia 2017, held in Mountain View, California, USA.

Organisers

Fabien Ringeval

Université Grenoble Alpes, France

fabien.ringeval@imag.fr

 

Michel Valstar

University of Nottingham, UK

michel.valstar@nottingham.ac.uk

 

Jonathan Gratch

U. of Southern California, USA

gratch@ict.usc.edu

 

Björn Schuller

Imperial College London/U. Passau,
UK/Germany

schuller@ieee.org

 

Roddy Cowie

Queen’s University Belfast, UK

r.cowie@qub.ac.uk

 

Maja Pantic

Imperial College London/Twente U.,
UK/The Netherlands

m.pantic@imperial.ac.uk

 

Data chairs

Stefan Scherer

U. of Southern California, USA

scherer@ict.usc.edu

Sharon Mozgai

U. of Southern California, USA

mozgai@ict.usc.edu

Nicholas Cummins

University of Passau, Germany

nicholas.cummins@uni-passau.de

Maximilian Schmitt

University of Passau, Germany

maximilian.schmitt@uni-passau.de

Scope

The Audio/Visual Emotion Challenge and Workshop (AVEC 2017) will be the seventh competition event aimed at comparing multimedia processing and machine learning methods for automatic audio, visual, and audiovisual depression and emotion analysis, with all participants competing under strictly the same conditions.

The goal of the Challenge is to provide a common benchmark test set for multimodal information processing and to bring together the audio, video, and audiovisual affect recognition communities in order to compare the relative merits of the three approaches to depression and emotion recognition under well-defined and strictly comparable conditions, and to establish to what extent fusion of the approaches is possible and beneficial. A second motivation is the need for depression and emotion recognition systems that can deal with fully naturalistic behaviour in large volumes of unsegmented, non-prototypical, and non-preselected data, as this is exactly the type of data that both multimedia and human-machine/human-robot communication interfaces have to face in the real world.

We are calling for teams to participate in a Challenge of fully continuous depression and emotion detection from audio, video, or audiovisual data. As benchmarking databases, the DAIC-WOZ corpus of human-agent interactions will be used for the Depression Sub-Challenge, and the SEWA corpus will be used for the Emotion Sub-Challenge. Both depression and emotion will have to be recognised continuously in both time and value.

Besides participation in the Challenge, we are calling for papers addressing the overall topics of this workshop, in particular work that addresses the differences between audio, video, and physiological processing of emotive data, and the issues concerning combined audio-visual-physiological emotion recognition.


Sponsors

AAAC · audEERING · SEWA

Program Committee

Elisabeth André, U. Augsburg, Germany

Kevin Bailly, U. Pierre et Marie Curie, France

Rama Chellappa, U. of Maryland, USA

Mohamed Chetouani, U. Pierre et Marie Curie, France

Eduardo Coutinho, U. of Liverpool, UK

Nicholas Cummins, U. of Passau, Germany

Laurence Devillers, Université Paris-Sud, France

Fernando de la Torre, Carnegie Mellon U., USA

Abhinav Dhall, Indian Institute of Technology Ropar, India

Sidney D’Mello, U. of Notre Dame, USA

Julien Epps, U. of New South Wales, Australia

Anna Esposito, U. of Naples, Italy

Roland Göcke, U. of Canberra, Australia

Julia Hirschberg, Columbia U., USA

Dongmei Jiang, Northwestern Polytechnical U., China

Chi-Chun Lee, National Tsing Hua U., Taiwan

Erik Marchi, Apple Inc., UK

Daniel McDuff, Microsoft Inc., USA

Marc Méhu, Webster Vienna Private U., Austria

Arianna Mencattini, U. of Rome Tor Vergata, Italy

Emily Mower Provost, U. of Michigan, USA

Shri Narayanan, U. of Southern California, USA

Rosalind Picard, Massachusetts Institute of Technology, USA

Peter Robinson, U. of Cambridge, UK

Ognjen Rudovic, Massachusetts Institute of Technology, USA

Mohammad Soleymani, U. of Geneva, Switzerland

Stefan Steidl, Friedrich-Alexander U. Erlangen-Nuremberg, Germany

Matthew Turk, U. of California, Santa Barbara, USA

Important Dates

Challenge opening:
May 4, 2017

Paper submission:
July 9, 2017

Notification of acceptance:
August 1, 2017

Camera-ready paper:
August 14, 2017

Workshop: T.B.C.

Topics include, but are not limited to:

Participation in the Challenge

Audio/Visual/Physiological Emotion Recognition

  • Audio-based Depression/Emotion Recognition
  • Video-based Depression/Emotion Recognition
  • Physiological-based Depression/Emotion Recognition
  • Synchrony of Non-Stationary Time Series
  • Multi-task learning of Multiple Dimensions
  • Weakly Supervised Learning
  • Agglomeration of Learning Data
  • Context in Depression/Emotion Recognition
  • Multiple Rater Ambiguity and Asynchrony

Application

  • Multimedia Coding and Retrieval
Submission Policy

In submitting a manuscript to this workshop, the authors acknowledge that no paper substantially similar in content has been submitted to another conference or workshop.

Manuscripts should follow the ACM MM 2017 paper format. Authors should submit papers as a PDF file. Submission will be via EasyChair.

Papers accepted for the workshop will be allocated 8 pages in the proceedings of ACM MM 2017.

AVEC 2017 reviewing is double-blind, and will be carried out by members of the Program Committee. Each paper will receive at least three reviews. Acceptance will be based on relevance to the workshop, novelty, and technical quality, as well as on performance when the submission concerns participation in one or both of the sub-challenges.