Programme

The FERA 2015 workshop will be a full-day event held on Monday 4 May 2015 in room E3 E2, the same venue as the main FG 2015 conference. The day will start with a keynote speech, followed by an introduction to the challenge, a series of paper presentations, an overview of the challenge results, and an announcement of the winners of the two sub-challenges. The day will close with a panel session discussing the future of AU analysis and of facial expression recognition challenges in particular.

———————————————–
9:30 – 10:30: Keynote (Prof. Qiang Ji, Rensselaer Polytechnic Institute, USA) (session chair: Michel Valstar)
———————————————–
10:30 – 11:00: Coffee Break

———————————————–
Session 1 – Action Unit Occurrence Detection (session chair: Jeff Cohn):
11:00 – 11:20: FERA 2015 challenge introduction
11:20 – 11:40: Deep Learning based FACS Action Unit Occurrence and Intensity Estimation – Amogh Gudi, Emrah Tasli, Tim Den Uyl, Andreas Maroulis
11:40 – 12:00: Learning to combine local models for Facial Action Unit detection – Shashank Jaiswal, Brais Martinez, Michel Valstar
12:00 – 12:20: Discriminant Multi-Label Manifold Embedding for Facial Action Unit Detection – Anil Yuce, Hua Gao, Jean-Philippe Thiran

———————————————–
12:20 – 13:30: Lunch Break
———————————————–
Session 2 – Action Unit Intensity Estimation (session chair: Lijun Yin):
13:30 – 13:50: Facial Action Units Intensity Estimation by the Fusion of Features with Multi-kernel Support Vector Machine – Zuheng Ming, Aurélie Bugeau, Jean-Luc Rouas, Takaaki Shochi
13:50 – 14:10: Cross-dataset learning and person-specific normalisation for automatic Action Unit detection – Tadas Baltrusaitis, Marwa Mahmoud, Peter Robinson
14:10 – 14:30: Facial Action Unit Intensity Prediction via Hard Multi-Task Metric Learning for Kernel Regression – Jeremie Nicolle, Kevin Bailly, Mohamed Chetouani
14:30 – 14:45: Challenge results

———————————————–
14:45 – 15:30: Panel Session
———————————————–

  • Fernando de la Torre
  • Qiang Ji
  • Matthew Turk

Keynote: Exploiting Facial Muscular Movement Dependencies for Robust Facial Action Recognition

Prof. Qiang Ji

Abstract:

Facial action unit (AU) recognition is concerned with recognizing local facial motions from images or video. It is challenging, in particular for spontaneous facial actions, because of subtle facial motions, frequent head movements, and ambiguous and uncertain facial motion measurements. Because of the underlying facial anatomy, as well as the need to produce meaningful and natural facial expressions, facial muscles often move in a synchronized and coordinated manner. Recognizing this fact, we propose to develop probabilistic AU models that systematically capture the spatiotemporal dependencies among the facial muscles, and to leverage the captured facial action relations for robust and accurate facial action recognition. In this talk, I will discuss our recent work in this area, which can be divided into data-based AU models and knowledge-based AU models. For data-based AU models, I will first discuss our earlier work that uses Dynamic Bayesian Networks to capture the local spatiotemporal relationships among AUs. I will then discuss our recent work that uses Restricted Boltzmann Machines (RBMs) to capture the high-order, global relationships among AUs.
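As a point of reference (this is the standard textbook RBM formulation, not a description of the speaker's specific models): an RBM over a binary AU occurrence vector v, with binary hidden units h, bias vectors a and b, and coupling matrix W, defines

\[
E(\mathbf{v}, \mathbf{h}) = -\mathbf{a}^{\top}\mathbf{v} - \mathbf{b}^{\top}\mathbf{h} - \mathbf{v}^{\top} W \mathbf{h},
\qquad
p(\mathbf{v}) \propto \sum_{\mathbf{h}} \exp\!\big(-E(\mathbf{v}, \mathbf{h})\big).
\]

Marginalizing over the hidden units couples all the visible units together, which is how an RBM can express high-order, global co-occurrence patterns among AUs rather than only pairwise relations.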

Data-based AU models depend on the training data: their performance suffers if the training data are lacking in either quality or quantity and, more importantly, they cannot generalize well beyond the data used to train them. To overcome this limitation, we introduce knowledge-based AU models, in which the AU relationships are learned exclusively from the generic facial anatomic knowledge that governs AU behavior, without any training data. For knowledge-based AU models, I will discuss the relevant anatomic knowledge, as well as our methods for capturing that knowledge and encoding it into the AU models. Finally, I will discuss experimental evaluations of our AU models on benchmark datasets and compare their performance against state-of-the-art methods.
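As a hypothetical illustration of such anatomic knowledge (the constraints actually used in the models presented may differ): AU1 (inner brow raiser) and AU2 (outer brow raiser) are both driven by the frontalis muscle and therefore tend to co-occur, which could be expressed as a probabilistic constraint of the form

\[
p(\mathrm{AU1} = 1 \mid \mathrm{AU2} = 1) > p(\mathrm{AU1} = 1 \mid \mathrm{AU2} = 0).
\]

Inequalities of this kind can constrain a model's parameters in place of labeled training data.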

Biography:

Qiang Ji received his Ph.D. degree in Electrical Engineering from the University of Washington. He is currently a Professor with the Department of Electrical, Computer, and Systems Engineering at Rensselaer Polytechnic Institute (RPI). From 2009 to 2010, he served as a program director at the National Science Foundation (NSF), where he managed NSF's computer vision and machine learning programs. He has also held teaching and research positions with the Beckman Institute at the University of Illinois at Urbana-Champaign, the Robotics Institute at Carnegie Mellon University, the Department of Computer Science at the University of Nevada, Reno, and the US Air Force Research Laboratory. Prof. Ji currently serves as the director of the Intelligent Systems Laboratory (ISL) at RPI.

Prof. Ji's research interests are in computer vision, probabilistic graphical models, pattern recognition, and their applications in various fields. He has published over 200 papers in peer-reviewed journals and conferences, and he has received multiple awards for his work. His research has been supported by major governmental agencies including NSF, NIH, DARPA, ONR, ARO, and AFOSR, as well as by major companies. Prof. Ji is an editor of several related IEEE and other international journals, and he has served as general chair, program chair, technical area chair, and program committee member for numerous international conferences and workshops. He is a fellow of the IEEE and the IAPR.