Programme

Thursday 24th of March 2011

Anacapa/Santa Cruz
17:00 – 22:00 Secondary test

Friday 25th of March 2011

8:45 – 9:00 Opening

9:00 – 9:45 Keynote 1: Jeff Cohn: Facial Expression Recognition or Emotion Understanding?

(see below for abstract and bio)

9:45 – 10:00 Coffee break

10:00 – 12:30 Oral Session 1 (6 papers)

Session Chair: Javier Movellan (UCSD)

  1. FERA: Emotion Recognition from an Ensemble of Features
    Tariq, Zhou, Lin, Li, Wang, Le, Huang, Han, and Lv
  2. Emotion Recognition by Two View SVM_2K Classifier on Dynamic Facial Expression Features
    Meng, Romera-Paredes, and Berthouze
  3. A CERT based approach to the FERA emotion challenge
    Littlewort, Bartlett, Whitehill, Wu, Butko, Ruvolo, and Movellan
  4. Emotion Recognition Using Shape and Appearance Features
    Dhall, Asthana, Goecke, and Gedeon
  5. Emotion recognition using dynamic grid-based HoG features
    Dahmane and Meunier
  6. Facial Expression Recognition using Emotion Avatar Image
    Yang and Bhanu

12:30 – 14:00 Lunch break

14:00 – 14:45 Keynote 2: Rana el Kaliouby & Rosalind Picard: The Story of Affectiva, and Challenges for Online Expression Recognition

(see below for abstract and bios)

14:45 – 15:00 Coffee break

15:00 – 17:05 Oral Session 2 (4 papers)

Session Chair: Marian Bartlett (UCSD)

  1. Combining LGBP Histograms with AAM coefficients in the Multi-Kernel SVM framework to detect Facial Action Units
    Senechal, Rapp, Salam, Seguier, Bailly, and Prevost
  2. Subject-Independent Facial Expression Detection using Constrained Local Models
    Chew, Lucey, Lucey, Saragih, Cohn, and Sridharan
  3. Between-dataset AU Recognition Transfer
    Butko, Movellan, Wu, Ruvolo, Whitehill, and Bartlett
  4. Real-time inference of mental states from facial expressions and upper body gestures
    McDuff, Baltrusaitis, Mahmoud, Banda, Robinson, El Kaliouby, and Picard

17:05 – 17:30 Meta-analysis of challenge + announcement of winners

17:30 – 18:30 Panel discussion (with, among others, Matthew Turk, Jeff Cohn, Javier Movellan, Marcello Mortillaro, and Maja Pantic)

Keynotes:

Jeff Cohn: Facial Expression Recognition or Emotion Understanding?

Automated facial expression recognition and understanding has made tremendous gains since a US National Science Foundation Workshop first proposed the task in 1992.  The first international conference (FG), now completing its ninth iteration, followed only three years later.  Special issues of major journals and a new IEEE journal on affective computing have emerged more recently. The FERA Grand Challenge represents another milestone.  For the first time, investigators have tested their algorithms on a common holdout test set.

The Grand Challenge was inspired by the success of similar challenges in the allied field of face recognition.  Challenges in face recognition, beginning with FERET, have contributed to exponential improvements in face recognition over the past 20 years. Face recognition and facial expression analysis share commonalities that include history, technical challenges, methods, and many of the same investigators.

Beyond the similarities, face and facial expression recognition differ in important ways.  Face identity is in principle objective, whereas facial expression is inherently subjective.  Face identity is a near-sufficient measure of person identity.  Facial expression is but an inferential measure of emotion.   Facial expressions may have varied meanings. To understand a person’s intentions or emotions or to understand human social dynamics more broadly, we need to consider multiple aspects of facial expression in combination with those from other modalities.  Context and timing information in particular appear to be critical to emotion understanding.  This view has important implications for database construction and future challenges.

Bio

Jeffrey Cohn is Professor of Psychology at the University of Pittsburgh and Adjunct Faculty at the Robotics Institute, Carnegie Mellon University. He received his PhD in psychology from the University of Massachusetts at Amherst. Dr. Cohn has led interdisciplinary and inter-institutional efforts to develop advanced methods of automatic analysis of facial expression and prosody and has applied those tools to research in human emotion and social dynamics, social development, and psychopathology. He is co-developer of the publicly released Cohn-Kanade, MultiPIE, and Pain Archive databases for automatic facial image analysis. He is Associate Editor of IEEE Transactions on Affective Computing, co-edited two recent special issues of Image and Vision Computing, and was Co-Chair of the 8th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2008).

Rana el Kaliouby & Rosalind Picard: The Story of Affectiva, and Challenges for Online Expression Recognition

When Rana and Roz met, they discovered their shared interest in developing emotion recognition technology to help people on the autism spectrum. They sought NSF funding, were rejected, and tried again until NSF generously provided resources to work together on this research. Over time they were approached by many people interested in using this technology for commercial purposes, and finally, when the requests became overwhelming, they decided to spin out a company to meet the growing needs.

Affectiva is now nearly two years old, selling sensors and professional services, and beginning to launch computer vision services for online facial expression recognition. The first such webcam-based service is online at Forbes (try it at http://www.forbes.com/2011/02/28/detect-smile-webcam-affectiva-mit-media-lab.html). It collected 2000 spontaneous facial expression videos in two weeks while presenting participants with their smile intensity plots. In the talk we’ll share several practical challenges for computer vision researchers in this new and growing business, as well as discuss ways to build a successful company that does great good, not only meeting the emotion communication needs of customers and users, but also helping our research community advance the state of the art in expression understanding. We’d like to get the audience’s input on the latter — including ways Affectiva can share data and co-develop tools. We will solicit your ideas during a Q&A period.

Bios

Rana el Kaliouby is co-founder and Chief Technology Officer of Affectiva, Inc., where she oversees the company’s product and technology roadmap around measuring and communicating emotion. She is also a Research Scientist at the MIT Media Laboratory, where she leads research on facial expression recognition, and a co-founder of the MIT Autism and Communication Technology Initiative. The New York Times rated her research developing the world’s first wearable social-emotional prosthesis as one of the top 100 innovations of 2006, and her facial expression recognition work has been featured in Reuters, Wired, The Boston Globe, and more. El Kaliouby is the 2006 recipient of the Global Women Inventors & Innovators Network award. She holds a BSc and MSc in computer science from the American University in Cairo and a PhD from the Computer Laboratory, University of Cambridge.

Rosalind Picard is founder and director of the Affective Computing Research Group and co-founder and director of the Autism and Communication Technology Initiative at the MIT Media Laboratory. She is also co-founder, chief scientist, and chairman of Affectiva, Inc. After receiving a Bachelor’s in Electrical Engineering from Georgia Tech with highest honors, Picard joined AT&T Bell Labs, where she collaborated on the design of the DSP16 chip and developed new algorithms and architectures for image compression. Picard earned master’s and doctorate degrees in Electrical Engineering and Computer Science from MIT and joined the MIT faculty in 1991. At MIT she authored the book Affective Computing in 1997, laying the foundation for a new field of research that gives computers skills of emotional intelligence. Picard leads a team of researchers at MIT focused on creating technology to recognize, interpret, and respond intelligently to emotion in multiple modalities, including physiology, face, posture, and task behavior.