
CK+

The Cohn-Kanade AU-Coded Facial Expression Database is for research in automatic facial image analysis and synthesis and for perceptual studies. Cohn-Kanade is available in two versions and a third is in preparation.

Version 1, the original release (Kanade, Cohn, & Tian, 2000), includes 486 sequences from 97 posers. Each sequence begins with a neutral expression and proceeds to a peak expression. The peak expression of each sequence is fully FACS coded (Ekman, Friesen, & Hager, 2002; Ekman & Friesen, 1978) and given an emotion label. The emotion label refers to the expression that was requested rather than what may actually have been performed. For validated emotion labels, use version 2, CK+, described below.
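A minimal sketch (Python) of how this onset-to-apex structure is typically used: the first frame of a sequence serves as a neutral sample and the last as the peak that carries the FACS coding and emotion label. The directory layout, file extension, and root path below are illustrative assumptions, not part of the official distribution description.

    from pathlib import Path

    def neutral_and_peak(sequence_dir: Path) -> tuple[Path, Path]:
        """Return (neutral, peak) frame paths for one onset-to-apex sequence."""
        # Frames are numbered with zero padding, so lexicographic order
        # matches temporal order (an assumption about the file naming).
        frames = sorted(sequence_dir.glob("*.png"))
        if len(frames) < 2:
            raise ValueError(f"{sequence_dir} has too few frames")
        return frames[0], frames[-1]

    # Example over a hypothetical root folder with one subfolder per sequence.
    for seq in sorted(Path("CK/sequences").iterdir()):
        if seq.is_dir():
            neutral, peak = neutral_and_peak(seq)
            print(seq.name, "neutral:", neutral.name, "peak:", peak.name)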

Version 2, referred to as CK+, includes both posed and non-posed (spontaneous) expressions and additional types of metadata. For posed expressions, the number of sequences is increased from the initial release by 22% and the number of subjects by 27%. As with the initial release, the target expression for each sequence is fully FACS coded. In addition, validated emotion labels have been added to the metadata, so sequences may be analyzed for both action units and prototypic emotions. The non-posed expressions are from Ambadar, Cohn, & Reed (2009). CK+ also provides protocols and baseline results for facial feature tracking and for action unit and emotion recognition. Tracking results for shape and appearance use the active appearance model approach of Matthews & Baker (2004). For action unit and emotion recognition, a linear support vector machine (SVM) classifier with leave-one-subject-out cross-validation was used; a sketch of that protocol appears below. Both sets of results are included with the metadata. For a full description of CK+, see P. Lucey et al. (2010).
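To make the baseline protocol concrete, here is a minimal sketch (Python, scikit-learn) of a linear SVM evaluated with leave-one-subject-out cross-validation. The feature, label, and subject arrays are random stand-ins; in practice they would come from the tracked shape/appearance features and the FACS and emotion metadata.

    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    features = rng.normal(size=(120, 64))     # one row per peak frame (stand-in data)
    labels = rng.integers(0, 7, size=120)     # emotion class per sequence
    subjects = rng.integers(0, 20, size=120)  # subject ID, used as the CV group

    # Each fold holds out every sequence of one subject, so the test subject
    # is never seen during training, matching subject-independent evaluation.
    scores = cross_val_score(LinearSVC(), features, labels,
                             groups=subjects, cv=LeaveOneGroupOut())
    print(f"mean accuracy over {len(scores)} subject folds: {scores.mean():.3f}")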

Version 3 is planned for spring 2011. The original Cohn-Kanade collection included synchronized frontal and 30-degree-from-frontal video; Version 3 will add the synchronized 30-degree-from-frontal recordings.

To receive the database for non-commercial research use, download, sign, and return the Agreement to the Affect Analysis Group.

  • year: 2010
  • month: Jun
  • edition: 2
  • url: http://vasc.ri.cmu.edu/idb/html/face/facial_expression/
  • main_author: Patrick Lucey, Jeffrey F. Cohn, Takeo Kanade, Jason Saragih, Zara Ambadar
  • license: Academic research use
  • subjects: 123
  • recordings: 593
  • duration: 1 second per clip
  • naturality: scripted
  • media: Video
  • language: None
  • interaction: None (single person portraying expressions on demand)
  • annotations: frame-based FACS AU activation for all videos

Categories: face-analysis; psychological-modelling
