Programme

The workshop will be a full-day event held on 26 October in the P6 room (Plaza level), the same venue as the main ACM Multimedia conference. The day will begin with a keynote speech, followed by an introduction to the challenge, a series of paper presentations, an overview of the challenge results and the announcement of the AV+EC 2015 winner, and will close with a panel discussion.

———————————————–
Session 0 – Keynote
Chair: Fabien Ringeval

9:00 – 10:00: AVEC’15 Keynote Talk – From Facial Expression Analysis to Multimodal Mood Analysis, Roland Göcke – University of Canberra, Australia

———————————————–
Session 1 – Introduction
Chair: Roland Göcke

10:00 – 10:30: AV+EC 2015 Challenge Introduction, Fabien Ringeval, Björn Schuller, Michel Valstar, Roddy Cowie and Maja Pantic – University of Passau, Germany, Imperial College London, UK, University of Nottingham, UK, Queen’s University Belfast, UK.

———————————————–
10:30 – 11:00: Coffee Break
———————————————–

Session 2: AV+EC 2015 Part 1
Chair: Michel Valstar

11:00 – 11:25: Ensemble Methods for Continuous Affect Recognition: Multi-modality, Temporality, and Challenges, Markus Kächele, Patrick Thiam, Günther Palm, Friedhelm Schwenker and Martin Schels – University of Ulm, Germany.
11:25 – 11:50: ETS System for AV+EC 2015 Challenge, Patrick Cardinal, Najim Dehak, Alessandro L. Koerich, Jahangir Alam and Patrice Boucher – École de technologie supérieure, Canada, Centre de recherche informatique de Montréal (CRIM), Canada, Massachusetts Institute of Technology (MIT), USA.
11:50 – 12:15: Vocal Emotion Recognition with Log-Gabor Filters, Yu Gu, Eric Postma and Haixiang Lin – Tilburg center for Cognition and Communication, Tilburg University, The Netherlands, Delft University of Technology, The Netherlands.
12:15 – 12:40: Multimodal Affective Analysis combining Regularized Linear Regression and Boosted Regression Trees, Aleksandar Milchevski, Alessandro Rozza and Dimitar Taskovski – Faculty of Electrical Engineering and Information Technologies, Republic of Macedonia, HYERA Software, Italy.

———————————————–
12:40 – 13:45: Lunch Break
———————————————–

Session 3: AV+EC 2015 Part 2
Chair: Fabien Ringeval

13:45 – 14:10: An Investigation of Annotation Delay Compensation and Output-Associative Fusion for Multimodal Continuous Emotion Prediction, Zhaocheng Huang, Ting Dang, Nicholas Cummins, Brian Stasak, Phu Le, Vidhyasaharan Sethu and Julien Epps – The University of New South Wales, Australia.
14:10 – 14:35: Multi-modal Dimensional Emotion Recognition using Recurrent Neural Network, Shizhe Chen and Qin Jin – Renmin University of China, China.
14:35 – 15:00: Exploring the Importance of Individual Differences to the Automatic Estimation of Emotions Induced by Music, Hesam Sagha, Eduardo Coutinho and Björn Schuller – University of Passau, Germany, Imperial College London, UK.

———————————————–
15:00 – 15:30: Coffee Break
———————————————–

Session 4: AV+EC 2015 Part 3
Chair: Fabien Ringeval

15:30 – 15:55: Long Short Term Memory Recurrent Neural Network based Multimodal Dimensional Emotion Recognition, Linlin Chao, Jianhua Tao, Minghao Yang, Ya Li and Zhengqi Wen – National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, China.
15:55 – 16:20: Multimodal Affective Dimension Prediction Using Deep Bidirectional Long Short-Term Memory Recurrent Neural Networks, Lang He, Dongmei Jiang, Le Yang, Peng Wu, Ercheng Pei and Hichem Sahli – Northwestern Polytechnical University, China, Vrije Universiteit Brussel, Belgium.
16:20 – 16:30: Challenge results.

Session 5: Panel Session
Chairs: Michel Valstar and Hatice Gunes
Panel members: Hayley Hung, Alan Hanjalic, and Julien Epps

16:30 – 17:00: Panel Session.

Keynote: From Facial Expression Analysis to Multimodal Mood Analysis

Prof. Roland Göcke

Abstract

In this talk, I will give an overview of our research into developing multimodal technology that analyses the affective state and, more broadly, the behaviour of humans. Such technology is useful for a number of applications, with healthcare, e.g. mental health disorders, being a particular focus for us. Depression and other mood disorders are common and disabling conditions. Their impact on individuals and families is profound. The WHO Global Burden of Disease reports quantify depression as the leading cause of disability worldwide. Despite this high prevalence, current clinical practice depends almost exclusively on self-report and clinical opinion, risking a range of subjective biases. There currently exist no laboratory-based measures of illness expression, course and recovery, and no objective markers of end-points for interventions in either clinical or research settings. Using a multimodal analysis of facial expressions and movements, body posture, head movements and vocal expressions, we are developing affective sensing technology that supports clinicians in the diagnosis and monitoring of treatment progress. Encouraging results from a recently completed pilot study demonstrate that this approach can achieve over 90% agreement with clinical assessment. Drawing on more than eight years of research, I will also discuss the lessons learnt in this project, such as measuring spontaneous expressions of affect, subtle expressions, and affect intensity using multimodal approaches. We are currently extending this line of research to other disorders such as anxiety, post-traumatic stress disorder, dementia and autism spectrum disorders. For the latter in particular, a natural progression is to analyse dyadic and group social interactions. At the core of our research is a focus on robust approaches that can work in real-world environments.

Biography

Roland Göcke is Professor of Affective Computing at the Faculty of Education, Science, Technology and Engineering, University of Canberra, Australia. He is the Head of the Vision and Sensing Group and the Director of the Human-Centred Technology Research Centre. He received his Master's degree in Computer Science from the University of Rostock, Germany, in 1998 and his PhD in Computer Science from the Australian National University, Canberra, Australia, in 2004. Before joining UC in December 2008, Prof. Göcke worked as a Senior Research Scientist with Seeing Machines, as a Researcher at the NICTA Canberra Research Labs, and as a Research Fellow at the Fraunhofer Institutes for Computer Graphics, Germany. His research interests are in affective computing, pattern recognition, computer vision, human-computer interaction, multimodal signal processing and e-research. Prof. Göcke is the author or co-author of more than 120 peer-reviewed publications. His research has been funded by grants from the Australian Research Council (ARC), the National Health and Medical Research Council (NHMRC), the National Science Foundation (NSF, USA), the Australian National Data Service (ANDS) and the National eResearch Collaboration Tools and Resources project (NeCTAR).