Recognizing Lower Face Action Units in Facial Expression

Yingli Tian, Takeo Kanade and Jeffrey F. Cohn

Abstract

Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions (e.g., happiness and anger). Such prototypic expressions, however, occur infrequently; human emotions and intentions are more often communicated by changes in one or two discrete facial features. In this paper, we develop an automatic system to analyze subtle changes in facial expressions based on both permanent (e.g., mouth, eye, and brow) and transient (e.g., furrows and wrinkles) facial features in a nearly frontal-view face image sequence. Multi-state facial component models are proposed for tracking and modeling the different facial features. Based on these multi-state models, detailed descriptions of the facial features, including the mouth, eyes, brows, cheeks, and related wrinkles, are detected and tracked without artificial enhancement. For the lower face, a mid-level feature representation motivated by the action units of the Facial Action Coding System (FACS) is developed using nine feature parameters. With these parameters as inputs, individual action units and action unit combinations are recognized by a neural network, achieving a recognition rate of 96.71%. The results show that our system can identify action units regardless of whether they occur singly or in combination.
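To illustrate the final classification stage described above, the following is a minimal sketch of a feed-forward neural network mapping the nine lower-face feature parameters to action unit class probabilities. The hidden-layer width, number of output classes, activation functions, and randomly initialized weights are all assumptions for illustration; the paper's actual architecture and trained weights are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 9   # nine lower-face feature parameters (from the paper)
N_HIDDEN = 12    # hidden-layer width: an assumption, not from the paper
N_CLASSES = 7    # number of AU / AU-combination classes: an assumption

# Randomly initialized weights stand in for a trained network.
W1 = rng.normal(scale=0.1, size=(N_FEATURES, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_HIDDEN, N_CLASSES))
b2 = np.zeros(N_CLASSES)

def recognize_aus(features: np.ndarray) -> np.ndarray:
    """Forward pass: nine feature parameters -> AU class probabilities."""
    h = np.tanh(features @ W1 + b1)       # hidden layer
    logits = h @ W2 + b2                  # output layer
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()

# Example: one frame's nine feature parameters (dummy values).
probs = recognize_aus(rng.normal(size=N_FEATURES))
```

The softmax output gives one probability per AU class; at test time the class with the highest probability would be reported as the recognized action unit or combination.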