
Call For Participation


AVEC 2015 is a satellite workshop of ACM Multimedia 2015, held on 26 October 2015 in Brisbane, Australia.

Paper submission is now closed.

The draft baseline paper is now available for download!

Organisers

Fabien Ringeval, TUM, Germany, fabien.ringeval@tum.de

Björn Schuller, Imperial College London / University of Passau, UK/Germany, schuller@ieee.org

Michel Valstar, University of Nottingham, UK, pszmv@nottingham.ac.uk

Roddy Cowie, Queen’s University Belfast, UK, r.cowie@qub.ac.uk

Maja Pantic, Imperial College London, UK, m.pantic@imperial.ac.uk

Scope

The Audio/Visual Emotion Challenge and Workshop (AV+EC 2015), “Bridging Across Audio, Video and Physio”, will be the fifth competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audio, visual and, for the first time, also physiological emotion analysis, with all participants competing under strictly the same conditions.

The goal of the Challenge is to provide a common benchmark test set for multimodal information processing and to bring together the audio, video and physiological emotion recognition communities, in order to compare the relative merits of the three approaches to emotion recognition under well-defined and strictly comparable conditions, and to establish to what extent fusion of the approaches is possible and beneficial. A second motivation is the need for emotion recognition systems that can deal with fully naturalistic behaviour in large volumes of unsegmented, non-prototypical and non-preselected data, as this is exactly the type of data that both multimedia and human-machine/human-robot communication interfaces have to face in the real world.

We are calling for teams to participate in a Challenge of fully-continuous emotion detection from audio, video or physiological data, or any combination of these three modalities. The RECOLA multimodal corpus of remote collaborative and affective interactions will be used as the benchmarking database. Emotion will have to be recognized as time-continuous, continuous-valued dimensional affect along two dimensions: arousal and valence.
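To make the task concrete, the following minimal sketch (a hypothetical illustration, not the official baseline system) trains one regressor per affective dimension on pre-extracted, fused multimodal features and outputs a frame-by-frame prediction of arousal and valence. All array shapes and the choice of scikit-learn's LinearSVR are assumptions made only for illustration; the actual RECOLA features and gold-standard annotations must be obtained from the challenge organisers.

    # Hypothetical sketch of fully-continuous dimensional affect prediction:
    # one regressor per dimension (arousal, valence), producing a value for
    # every frame of the test recording. Random arrays stand in for the real
    # RECOLA features and gold-standard annotations.
    import numpy as np
    from sklearn.svm import LinearSVR

    rng = np.random.default_rng(0)
    n_train, n_test, n_feat = 1000, 200, 50   # assumed frame counts and feature size

    X_train = rng.normal(size=(n_train, n_feat))      # stand-in for fused audio/video/physio features
    y_train = rng.uniform(-1, 1, size=(n_train, 2))   # stand-in for per-frame arousal and valence
    X_test = rng.normal(size=(n_test, n_feat))

    # Train an independent linear regressor for each affective dimension.
    predictions = np.column_stack([
        LinearSVR(C=0.1, max_iter=10000).fit(X_train, y_train[:, d]).predict(X_test)
        for d in range(2)
    ])
    print(predictions.shape)  # (n_test, 2): time-continuous arousal and valence traces

In practice, participants may instead train per-modality models and fuse them at the decision level, which is one way to study how beneficial the combination of the three modalities is.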

Besides participation in the Challenge we are calling for papers addressing the overall topics of this workshop, in particular works that address the differences between audio, video and physiological processing of emotive data, and the issues concerning combined audio-visual-physiological emotion recognition.

Please visit our website http://sspnet.eu/avec2015 for more information.

Program Committee

Felix Burkhardt, Deutsche Telekom, Germany

Rama Chellappa, University of Maryland, USA

Fang Chen, NICTA, Australia

Mohamed Chetouani, Institut des Systèmes Intelligents et de Robotique (ISIR), France

Jeffrey Cohn, University of Pittsburgh, USA

Laurence Devillers, Université Paris-Sud, France

Julien Epps, University of New South Wales, Australia

Anna Esposito, University of Naples, Italy

Roland Goecke, University of Canberra, Australia

Jarek Krajewski, Universität Wuppertal, Germany

Marc Mehu, Webster Vienna Private University, Austria

Louis-Philippe Morency, University of Southern California, USA

Richard Morriss, University of Nottingham, UK

Stefan Scherer, University of Southern California, USA

Stefan Steidl, Universität Erlangen-Nuremberg, Germany

Jianhua Tao, Chinese Academy of Sciences, China

Matthew Turk, University of California, USA

Stefanos Zafeiriou, Imperial College London, UK

Important Dates

Paper submission: 15 July, 2015

Notification of acceptance: 7 August, 2015

Final results: 12 August, 2015

Workshop registration: 16 August, 2015

Camera ready paper (hard deadline): 19 August, 2015

Workshop: 26 October, 2015

Topics include, but are not limited to:

Participation in the Challenge

Audio/Visual/Physiological Emotion Recognition

  • Audio-based Emotion Recognition
  • Video-based Emotion Recognition
  • Physiology-based Emotion Recognition
  • Synchrony of Non-Stationary Time Series
  • Multi-task learning of Multiple Dimensions
  • Weakly Supervised Learning
  • Agglomeration of Learning Data
  • Context in Emotion Recognition
  • Multiple Rater Ambiguity and Asynchrony

Application

  • Multimedia Coding and Retrieval
  • Usability of Audio/Visual/Physiological Emotion Recognition
  • Real-time Issues

Submission Policy

In submitting a manuscript to this workshop, the authors acknowledge that no paper substantially similar in content has been submitted to another conference or workshop.

Manuscripts should follow the ACM MM 2015 paper format and be submitted as a PDF file via EasyChair.

Papers accepted for the workshop will be allocated 8 pages in the proceedings of ACM MM 2015.

AV+EC 2015 reviewing is double-blind. Reviewing will be carried out by members of the program committee, and each paper will receive at least three reviews. Acceptance will be based on relevance to the workshop, novelty, and technical quality.