Final Report To NSF of the Planning Workshop on Facial Expression Understanding

This session surveys the different sources of information in the face and the different types of information that can be derived. Neural efferent pathways include the brain areas transmitting to the facial nucleus, the facial nucleus to the facial nerve, and the facial nerve to the muscles. The relationships among electromyographic (EMG) measurement, muscle tonus measurement, and visibly observable facial activity are considered. Evidence on emotion signals includes universals, development, spontaneous versus deliberate actions, and masked emotions. The face also provides conversational signals and signs relevant to cognitive activity. The logic of comprehensively measuring facial movement illustrates how FACS scores facial behavior, the mechanics of facial movement, and options for what to score (intensity, timing, symmetry). Relationships between facial behavior, voice, and physiological measures are discussed. A database of the face and support for implementing this resource are needed.

Presenters: J. T. Cacioppo, P. Ekman, W. V. Friesen, J. C. Hager, C. E. Izard

Facial Signal Systems

The face is the site for the major sensory inputs and the major communicative outputs. It is a multisignal, multimessage response system capable of tremendous flexibility and specificity (Ekman, 1979; Ekman & Friesen, 1975).
This system conveys information via four general classes of signals or sign vehicles: (1) static facial signals represent relatively permanent features of the face, such as the bony structure and soft tissue masses, that contribute to an individual's appearance; (2) slow facial signals represent changes in the appearance of the face that occur gradually over time, such as the development of permanent wrinkles and changes in skin texture; (3) artificial signals represent exogenously determined features of the face, such as eyeglasses and cosmetics; and (4) rapid facial signals represent phasic changes in neuromuscular activity that may lead to visually detectable changes in facial appearance. (See Ekman, 1978, for discussion of these four signal systems and the eighteen different messages that can be derived from them.)

All four classes of signals contribute to facial recognition. We are concerned here, however, with rapid signals. These movements of the facial muscles pull the skin, temporarily distorting the shape of the eyes, brows, and lips, and producing folds, furrows, and bulges in different patches of skin. These changes in facial muscular activity typically are brief, lasting a few seconds; rarely do they endure more than five seconds or less than 250 ms.

The most useful terminology for describing or measuring facial actions refers to the production system, that is, the activity of specific muscles. These muscles may be designated by their Latin names or by a numeric system of Action Units (AUs), as in Ekman and Friesen's Facial Action Coding System (FACS, see page 10). A coarser level of description involves terms such as smile, smirk, frown, and sneer, which are imprecise: they ignore differences among the variety of muscular actions to which they may refer, and they mix description with inferences about meaning, or the message the actions may convey.
Among the types of messages conveyed by rapid facial signals are: (1) emotions, including happiness, sadness, anger, disgust, surprise, and fear; (2) emblems, culture-specific symbolic communicators such as the wink; (3) manipulators, self-manipulative associated movements such as lip biting; (4) illustrators, actions accompanying and highlighting speech such as a raised brow; and (5) regulators, nonverbal conversational mediators such as nods or smiles (Ekman & Friesen, 1969).

A further distinction can be drawn among rapid facial actions that reflect: (1) reflex actions under the control of afferent input; (2) rudimentary reflex-like or impulsive actions accompanying emotion and less differentiated information processing (e.g., the orienting or defense response) that appear to be controlled by innate motor programs; (3) adaptable, versatile, and more culturally variable spontaneous actions that appear to be mediated by learned motor programs; and (4) malleable voluntary actions. Thus, some classes of rapid facial actions are relatively undemanding of a person's limited information processing capacity, free of deliberate control for their evocation, and associated with (though not necessary for) rudimentary emotional and symbolic processing, whereas others are demanding of processing capacity, are under voluntary control, and are governed by complex and culturally specific prescriptions, or display rules (Ekman & Friesen, 1969), for facial communications. (The terms facial "actions," "movements," and "expressions" are used interchangeably throughout this report.)

Techniques for Measuring the Rapid Facial Signals

Numerous methods exist for measuring facial movements resulting from the action of muscles (see a review of 14 such techniques in Ekman, 1982; also Hager, 1985, for a comparison of the two most commonly used, FACS and MAX). The Facial Action Coding System (FACS) (Ekman & Friesen, 1978) is the most comprehensive, widely used, and versatile system.
Because it is being used by most of the Workshop participants who currently are working with facial movement, and is referred to many times in the rest of this report, more detail is given here about its derivation and use than about other techniques. Later, the section on the neuroanatomy of facial movement (page 12) discusses electromyography (EMG), which can measure activity that might not be visible and, therefore, is not a social signal.

The Facial Action Coding System (FACS)

FACS was developed by determining how the contraction of each facial muscle (singly and in combination with other muscles) changes the appearance of the face. Videotapes of more than 5000 different combinations of muscular actions were examined to determine the specific changes in appearance that occurred and how best to differentiate one from another. It was not possible to reliably distinguish which specific muscle had acted to produce the lowering of the eyebrow and the drawing of the eyebrows together; therefore, the three muscles involved in these changes in appearance were combined into one Action Unit (AU). Likewise, the muscles involved in opening the lips have been combined. Measurement with FACS is done in terms of Action Units rather than muscular units for two reasons. First, for a few changes in appearance, more than one muscle has been combined into a single AU, as described above. Second, FACS separates into two AUs the activity of the frontalis muscle, because the inner and outer portions of this muscle can act independently, producing different changes in appearance. There are 46 AUs that account for changes in facial expression and 12 AUs that more grossly describe changes in gaze direction and head orientation. Coders spend approximately 100 hours learning FACS. Self-instructional materials teach the anatomy of facial activity, i.e., how muscles singly and in combination change the appearance of the face.
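As an illustration of the AU terminology above, the following Python sketch lists a few well-known Action Units together with the appearance change and muscular basis each one describes. The AU numbers and descriptions follow standard FACS usage; the code itself is purely expository and is not part of FACS.

```python
# Illustrative sketch: FACS Action Units are descriptive codes for
# appearance changes, each tied to one or more underlying muscles.
ACTION_UNITS = {
    1: "inner brow raiser (medial frontalis)",
    2: "outer brow raiser (lateral frontalis)",  # frontalis is split into two AUs
    4: "brow lowerer (corrugator supercilii, depressor supercilii, procerus)",
    6: "cheek raiser (orbicularis oculi, orbital part)",
    12: "lip corner puller (zygomaticus major)",
}

def describe(aus):
    """Render a list of AU codes as their descriptive labels."""
    return [f"AU {n}: {ACTION_UNITS[n]}" for n in aus]

# AU 4 shows why AUs rather than muscles are scored: three muscles that
# could not be reliably distinguished are combined into a single unit.
print(describe([1, 4]))
```

AU 4 here corresponds to the case discussed in the text, where three muscles producing brow lowering are merged into one Action Unit.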
Prior to using FACS, all learners are required to score a videotaped test (provided by Ekman) to ensure that they are measuring facial behavior in agreement with prior learners. To date, more than 300 people have achieved high inter-coder agreement on this test. A FACS coder "dissects" an observed expression, decomposing it into the specific AUs that produced the movement. The coder repeatedly views records of behavior in slowed and stopped motion to determine which AU or combination of AUs best accounts for the observed changes. The scores for a facial expression consist of the list of AUs that produced it. The precise duration of each action also is determined, and the intensity of each muscular action and any bilateral asymmetry is rated. In the most elaborate use of FACS, the coder determines the onset (first evidence) of each AU, when the action reaches an apex (asymptote), the end of the apex period when it begins to decline, and when it disappears from the face completely (offset). These time measurements are usually much more costly to obtain than the decision about which AU(s) produced the movement, and in most research only onset and offset have been measured. The FACS scoring units are descriptive, involving no inferences about emotions. For example, the scores for an upper-face expression might be that the inner corners of the eyebrows are pulled up (AU 1) and together (AU 4), rather than that the eyebrows' position shows sadness. Data analyses can be done on these purely descriptive AU scores, or FACS scores can be converted by a computer, using a dictionary and rules, into emotion scores.
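The timing measurements just described (onset, start and end of the apex, offset) can be pictured as a small per-AU record. The following is a minimal Python sketch for exposition only, not software associated with FACS; the field names and the example values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AUEvent:
    """One scored Action Unit with the four timing landmarks from the text."""
    au: int            # Action Unit number
    onset: float       # first evidence of the action, in seconds
    apex_start: float  # action reaches its asymptote
    apex_end: float    # end of the apex period, action begins to decline
    offset: float      # action disappears from the face completely
    asymmetry: bool = False  # bilateral asymmetry noted by the coder

    def duration(self) -> float:
        """Total duration from onset to offset."""
        return self.offset - self.onset

# A hypothetical scoring of a lip corner pull (AU 12):
event = AUEvent(au=12, onset=0.3, apex_start=0.8, apex_end=2.1, offset=2.9)
print(round(event.duration(), 2))  # 2.6
```

As the text notes, most studies record only the onset and offset fields; the apex landmarks are the costly part of the measurement.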
Although this emotion interpretation dictionary was originally based on theory, there is now considerable empirical support for the facial action patterns listed in it: FACS scores yield highly accurate pre- and postdictions of the emotions signaled to observers in more than fifteen cultures, Western and non-Western, literate and preliterate (Ekman, 1989); specific AU scores show moderate to high correlations with subjective reports by the expresser about the quality and intensity of the felt emotion (e.g., Davidson et al., 1990); experimental circumstances are associated with specific facial expressions (Ekman, 1984); and different and specific patterns of physiological activity co-occur with specific facial expressions (Davidson et al., 1990). The emotion prediction dictionary provides scores on the frequency of the seven single emotions (anger, fear, disgust, sadness, happiness, contempt, and surprise), the co-occurrence of two or more of these emotions in blends, and a distinction between emotional and nonemotional smiling, which is based on whether or not the muscle that orbits the eye (AU 6) is present with the muscle that pulls the lip corners up obliquely (AU 12).
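The dictionary-and-rules conversion described above can be sketched in Python. Only the smile rule (AU 6 occurring together with the lip corner puller, AU 12) is taken from the text; the two emotion patterns shown are simplified placeholders, not entries from the actual prediction dictionary.

```python
# Toy sketch of an AU-to-emotion "dictionary" of the kind described above.
# The patterns below are illustrative placeholders only.
EMOTION_PATTERNS = {
    "surprise": {1, 2, 5, 26},
    "sadness": {1, 4, 15},
}

def classify_smile(aus):
    """Distinguish emotional from nonemotional smiling via AU 6 + AU 12."""
    aus = set(aus)
    if 12 in aus:
        return "emotional smile" if 6 in aus else "nonemotional smile"
    return "no smile"

def matched_emotions(aus):
    """Return every emotion whose full AU pattern is present in the score."""
    aus = set(aus)
    return [name for name, pattern in EMOTION_PATTERNS.items()
            if pattern <= aus]

print(classify_smile([6, 12]))       # emotional smile
print(matched_emotions([1, 4, 15]))  # ['sadness']
```

A real dictionary would also score blends, i.e., cases where the AU patterns of two or more emotions co-occur in one expression, which here would simply be a `matched_emotions` result of length greater than one.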

[1]  C. Darwin, The Expression of the Emotions in Man and Animals, 1872.

[2]  G. H. Monrad‐Krohn, On the Dissociation of Voluntary and Emotional Innervation in Facial Paresis of Central Origin, 1924.

[3]  W. H. Sumby,et al.  Visual contribution to speech intelligibility in noise , 1954 .

[4]  J. Schachter Pain, Fear, and Anger in Hypertensives and Normotensives: A Psychophysiological Study , 1957, Psychosomatic medicine.

[5]  H. Brooks,et al.  Medical physiology , 1961 .

[6]  D. Hubel,et al.  Receptive fields, binocular interaction and functional architecture in the cat's visual cortex , 1962, The Journal of physiology.

[7]  C. G. Fisher,et al.  Confusions among visually perceived consonants. , 1968, Journal of speech and hearing research.

[8]  Makoto Nagao,et al.  Line extraction and pattern detection in a photograph , 1969, Pattern Recognit..

[9]  F. Plum,et al.  Hyperphagia, rage, and dementia accompanying a ventromedial hypothalamic neoplasm. , 1969, Archives of neurology.

[10]  P. Ekman,et al.  The Repertoire of Nonverbal Behavior: Categories, Origins, Usage, and Coding , 1969 .

[11]  F. Mcguigan Covert oral behavior during the silent performance of language tasks. , 1970 .

[12]  Peter M. Will,et al.  Grid Coding: A Preprocessing Technique for Robot and Machine Vision , 1971, IJCAI.

[13]  L. D. Harmon,et al.  Identification of human faces , 1971 .

[14]  D. B. Bender,et al.  Visual properties of neurons in inferotemporal cortex of the Macaque. , 1972, Journal of neurophysiology.

[15]  P. Ekman Universals and cultural differences in facial expressions of emotion. , 1972 .

[16]  Y. Kaya, et al.  A Basic Study on Human Face Recognition, 1972.

[17]  Frederick I. Parke,et al.  Computer generated animation of faces , 1972, ACM Annual Conference.

[18]  P. Ekman,et al.  Emotion in the Human Face: Guidelines for Research and an Integration of Findings , 1972 .

[19]  T. Sakai,et al.  Computer analysis and classification of photographs of human faces , 1973 .

[20]  Herman Chernoff,et al.  The Use of Faces to Represent Points in k- Dimensional Space Graphically , 1973 .

[21]  Martin A. Fischler,et al.  The Representation and Matching of Pictorial Structures , 1973, IEEE Transactions on Computers.

[22]  Richard O. Duda,et al.  Pattern classification and scene analysis , 1974, A Wiley-Interscience publication.

[23]  D. Barash Human Ethology , 1973 .

[24]  Takeo Kanade, et al.  Picture processing system by computer complex and recognition of human faces, 1974.

[25]  Frederic I. Parke,et al.  A parametric model for human faces. , 1974 .

[26]  Mark Lee Gillenson,et al.  The interactive generation of facial images on a crt using a heuristic strategy. , 1974 .

[27]  P. Ekman,et al.  Unmasking the face : a guide to recognizing emotions from facial clues , 1975 .

[28]  Lawrence A. Fried,et al.  Anatomy of the head, neck, face, and jaws , 1976 .

[29]  S. Carey,et al.  From piecemeal to configurational representation of faces. , 1977, Science.

[30]  L. Camras,et al.  Facial expressions used by children in a conflict situation. , 1977, Child development.

[31]  P. Ekman,et al.  Facial action coding system: a technique for the measurement of facial movement , 1978 .

[32]  Judith A. Hall Gender Effects in Decoding Nonverbal Cues , 1978 .

[33]  Giacomo Rizzolatti,et al.  Neurons responding to visual stimuli in the frontal lobe of macaque monkeys , 1979, Neuroscience Letters.

[34]  Takeo Kanade,et al.  Computer recognition of human faces , 1980 .

[35]  William B. Thompson, et al.  IEEE Transactions on Pattern Analysis and Machine Intelligence.

[36]  R. Buck,et al.  Nonverbal Communication of Affect in Brain-Damaged Patients , 1980, Cortex.

[37]  J. O'Rourke,et al.  Model-based image analysis of human motion using constraint propagation , 1980, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[38]  J. Lanzetta,et al.  Vicarious instigation and conditioning of facial expressive and autonomic responses to a model's expressive display of pain. , 1980, Journal of personality and social psychology.

[39]  J. Cacioppo,et al.  Electromyographic specificity during covert information processing. , 1981, Psychophysiology.

[40]  B. Milner,et al.  Performance of complex arm and facial movements after focal brain lesions , 1981, Neuropsychologia.

[41]  Berthold K. P. Horn,et al.  Determining Optical Flow , 1981, Other Conferences.

[42]  Norman I. Badler,et al.  Animating facial expressions , 1981, SIGGRAPH '81.

[43]  P. Ekman,et al.  Handbook of methods in nonverbal behavior research , 1982 .

[44]  G. V. Van Hoesen,et al.  Prosopagnosia , 1982, Neurology.

[45]  Parke,et al.  Parameterized Models for Facial Animation , 1982, IEEE Computer Graphics and Applications.

[46]  P. Ekman,et al.  Autonomic nervous system activity distinguishes among emotions. , 1983, Science.

[47]  W. Eric L. Grimson,et al.  An implementation of a computational theory of visual surface interpolation , 1983, Comput. Vis. Graph. Image Process..

[48]  A. Montgomery,et al.  Physical characteristics of the lips underlying vowel lipreading performance. , 1983, The Journal of the Acoustical Society of America.

[49]  A. J. Mistlin,et al.  Neurones responsive to faces in the temporal cortex: studies of functional organization, sensitivity to identity and relation to perception. , 1984, Human neurobiology.

[50]  W. Rinn,et al.  The neuropsychology of facial expression: a review of the neurological and psychological mechanisms for producing facial expressions. , 1984, Psychological bulletin.

[51]  P. Ekman Expression and the Nature of Emotion , 1984 .

[52]  E. Rolls Neurons in the cortex of the temporal lobe and in the amygdala of the monkey with responses selective for faces. , 1984, Human neurobiology.

[53]  Eric David Petajan,et al.  Automatic Lipreading to Enhance Speech Recognition (Speech Reading) , 1984 .

[54]  Stephen Michael Platt A structural model of the human face (graphics, animation, object representation) , 1985 .

[55]  D Psaltis,et al.  Optical implementation of the Hopfield model. , 1985, Applied optics.

[56]  Joseph C. Hager,et al.  A comparison of units for visually measuring facial actions , 1985 .

[57]  E. Rolls,et al.  Selectivity between faces in the responses of a population of neurons in the cortex in the superior temporal sulcus of the monkey , 1985, Brain Research.

[58]  Bruce Bowe The Face of Emotion , 1985 .

[59]  D Psaltis,et al.  Optical information processing based on an associative-memory model of neural nets with thresholding and feedback. , 1985, Optics letters.

[60]  P. Ekman,et al.  The asymmetry of facial actions is inconsistent with models of hemispheric specialization. , 1985, Psychophysiology.

[61]  J. Cacioppo,et al.  Semantic, evaluative, and self-referent processing: memory, cognitive effort, and somatovisceral activity. , 1985, Psychophysiology.

[62]  B. Englis,et al.  Emotional reactions to a political leader's expressive displays. , 1985 .

[63]  E. Strauss,et al.  Cerebral organization of affect suggested by temporal lobe seizures , 1985, Neurology.

[64]  D. Z. Anderson,et al.  Coherent optical eigenstate memory. , 1986, Optics letters.

[65]  P. Ekman,et al.  A new pan-cultural facial expression of emotion , 1986 .

[66]  H C Lee,et al.  Method for computing the scene-illuminant chromaticity from specular highlights. , 1986, Journal of the Optical Society of America. A, Optics and image science.

[67]  Brian Wyvill,et al.  Speech and expression: a computer solution to face animation , 1986 .

[68]  W. Larrabee,et al.  A finite element model of skin deformation. II. An experimental model of skin deformation , 1986, The Laryngoscope.

[69]  J. Borod,et al.  Facial expression of positive and negative emotions in patients with unipolar depression. , 1986, Journal of affective disorders.

[70]  I. Florin,et al.  Expression of emotion in asthmatic children and their mothers. , 1986, Journal of psychosomatic research.

[71]  J. P. Lewis,et al.  Automated lip-synch and speech synthesis for character animation , 1986, CHI '87.

[72]  S. Nishida Speech recognition enhancement by lip information , 1986, CHI '86.

[73]  Joseph S. Perkell,et al.  Coarticulation strategies: preliminary implications of a detailed analysis of lower lip protrusion movements , 1986, Speech Commun..

[74]  Demetri Psaltis,et al.  Optical Neural Computers , 1987, Topical Meeting on Optical Computing.

[75]  Masaaki Oka,et al.  Real-time manipulation of texture-mapped surfaces , 1987, SIGGRAPH '87.

[76]  Ian Craw,et al.  Automatic extraction of face-features , 1987, Pattern Recognit. Lett..

[77]  J. Russell,et al.  Relativity in the Perception of Emotion in Facial Expressions , 1987 .

[78]  K Wagner,et al.  Multilayer optical learning networks. , 1987, Applied optics.

[79]  Ishwar K. Sethi,et al.  Finding Trajectories of Feature Points in a Monocular Image Sequence , 1987, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[80]  J. Cacioppo,et al.  Waveform moment analysis in psychophysiological research. , 1987, Psychological bulletin.

[81]  S. Horton Reduction of Disruptive Mealtime Behavior by Facial Screening , 1987, Behavior modification.

[82]  M. Hasselmo,et al.  The responses of neurons in the cortex in the superior temporal sulcus of the monkey to band-pass spatial frequency filtered faces , 1987, Vision Research.

[83]  Terrence J. Sejnowski,et al.  Parallel Networks that Learn to Pronounce English Text , 1987, Complex Syst..

[84]  G. J. Dunning,et al.  Holographic associative memory with nonlinearities in the correlation domain. , 1987, Applied optics.

[85]  Alice J. O'Toole,et al.  A physical system approach to recognition memory for spatially transformed faces , 1988, Neural Networks.

[86]  E. Petajan,et al.  An improved automatic lipreading system to enhance speech recognition , 1988, CHI '88.

[87]  P. Yeh,et al.  Optical interconnection using photorefractive dynamic holograms. , 1988, Applied optics.

[88]  Yukio Kobayashi,et al.  Method Of Detecting Face Direction Using Image Processing For Human Interface , 1988, Other Conferences.

[89]  Max Mintz,et al.  Robust fusion of location information , 1988, Proceedings. 1988 IEEE International Conference on Robotics and Automation.

[90]  A. Damasio,et al.  Intact recognition of facial expression, gender, and age in patients with impaired recognition of face identity , 1988, Neurology.

[91]  Jake K. Aggarwal,et al.  On the computation of motion from sequences of images-A review , 1988, Proc. IEEE.

[92]  Peter J. Burt,et al.  Smart sensing within a pyramid vision machine , 1988, Proc. IEEE.

[93]  P. Ekman,et al.  Smiles when lying. , 1988, Journal of personality and social psychology.

[94]  D. Brady,et al.  Adaptive optical networks using photorefractive crystals. , 1988, Applied optics.

[95]  Thomas S. Huang,et al.  A survey of construction and manipulation of octrees , 1988, Comput. Vis. Graph. Image Process..

[96]  Keith Waters The computer synthesis of expressive three-dimensional facial character animation , 1988 .

[97]  P. Ekman The argument and evidence about universals in facial expressions of emotion. , 1989 .

[98]  K. Prkachin,et al.  Pain expression in patients with shoulder pathology: validity, properties and relationship to sickness impact , 1989, Pain.

[99]  Barak A. Pearlmutter Learning State Space Trajectories in Recurrent Neural Networks , 1989, Neural Computation.

[100]  Y. J. Tejwani,et al.  Robot vision , 1989, IEEE International Symposium on Circuits and Systems,.

[101]  Carver Mead,et al.  Analog VLSI and neural systems , 1989 .

[102]  Steven Donald Pieper,et al.  More than skin deep : physical modeling of facial tissue , 1989 .

[103]  R. Krause,et al.  Facial expression of schizophrenic patients and their interaction partners. , 1989, Psychiatry.

[104]  D. Kriegman,et al.  On recognizing and positioning curved 3D objects from image contours , 1989, [1989] Proceedings. Workshop on Interpretation of 3D Scenes.

[105]  Alex Pentland,et al.  Automatic lipreading by optical-flow analysis , 1989 .

[106]  Clea T. Waite,et al.  The facial action control editor, face : a parametric facial expression editor for computer generated animation , 1989 .

[107]  Kiyoharu Aizawa,et al.  Model-based analysis synthesis image coding (MBASIC) system for a person's face , 1989, Signal Process. Image Commun..

[108]  Geoffrey E. Hinton,et al.  Phoneme recognition using time-delay neural networks , 1989, IEEE Trans. Acoust. Speech Signal Process..

[109]  C. von der Malsburg,et al.  Distortion invariant object recognition by matching hierarchically labeled graphs , 1989, International 1989 Joint Conference on Neural Networks.

[110]  J. Cacioppo,et al.  A psychometric study of surface electrode placements for facial electromyographic recording: I. The brow and cheek muscle regions. , 1989, Psychophysiology.

[111]  B.P. Yuhas,et al.  Integration of acoustic and visual speech signals using neural networks , 1989, IEEE Communications Magazine.

[112]  Murray Alpert,et al.  Perceiver and poser asymmetries in processing facial emotion , 1990, Brain and Cognition.

[113]  Demetri Terzopoulos,et al.  Analysis of facial images using physical and anatomical models , 1990, [1990] Proceedings Third International Conference on Computer Vision.

[114]  Demetri Terzopoulos,et al.  Physically-based facial modelling, analysis, and animation , 1990, Comput. Animat. Virtual Worlds.

[115]  P. Ekman,et al.  Voluntary facial action generates emotion-specific autonomic nervous system activity. , 1990, Psychophysiology.

[116]  Demetri Terzopoulos,et al.  A physical model of facial tissue and muscle articulation , 1990, [1990] Proceedings of the First Conference on Visualization in Biomedical Computing.

[117]  Eric S. Maniloff,et al.  Dynamic holographic interconnects using static holograms , 1990, Annual Meeting Optical Society of America.

[118]  A. J. Fridlund,et al.  The Skeletomotor System , 1990 .

[119]  John A. Stern, The ocular system, 1990.

[120]  Yasuhiko Watanabe,et al.  Real-time head motion detection system , 1990, Other Conferences.

[121]  D. Psaltis,et al.  Holography in artificial neural networks , 1990, Nature.

[122]  J. Cacioppo,et al.  Principles of psychophysiology : physical, social, and inferential elements , 1990 .

[123]  Kevin W. Bowyer,et al.  Computing the orthographic projection aspect graph of solids of revolution , 1990, Pattern Recognit. Lett..

[124]  T. J. Drabik,et al.  Silicon VLSI/ferroelectric liquid crystal technology for micropower optoelectronic computing devices. , 1990, Applied optics.

[125]  Lance Williams, Performance-driven facial animation, 1990.

[126]  Ronald S. Van Gelder, Facial Expression and Speech: Neuroanatomical Considerations, 1990.

[127]  Alex Pentland,et al.  Face Processing: Models For Recognition , 1990, Other Conferences.

[128]  Venu Govindaraju,et al.  A computational model for face location , 1990, [1990] Proceedings Third International Conference on Computer Vision.

[129]  R. Krause,et al.  Interaction regulations used by schizophrenic and psychosomatic patients: studies on facial behavior in dyadic interactions. , 1990, Psychiatry.

[130]  Terrence J. Sejnowski,et al.  SEXNET: A Neural Network Identifies Sex From Human Faces , 1990, NIPS.

[131]  J. B. Waite,et al.  Head boundary location using snakes , 1990 .

[132]  Hirohisa Yaguchi,et al.  Facial pattern detection and color correction from negative color film , 1990 .

[133]  P. Ekman,et al.  Type A behavior pattern: facial behavior and speech components. , 1990, Psychosomatic medicine.

[134]  T. Poggio A theory of how the brain might work. , 1990, Cold Spring Harbor symposia on quantitative biology.

[135]  J. A. Russell,et al.  The contempt expression and the relativity thesis , 1991 .

[136]  C. Mead,et al.  The silicon retina. , 1991, Scientific American.

[137]  Peter W. Hallinan Recognizing human eyes , 1991, Optics & Photonics.

[138]  A. J. O'toole,et al.  Classifying faces by face and sex using an autoassociative memory trained for recognition , 1991 .

[139]  A. Yuille Deformable Templates for Face Recognition , 1991, Journal of Cognitive Neuroscience.

[140]  Demetri Terzopoulos,et al.  Modelling and animating faces using scanned data , 1991, Comput. Animat. Virtual Worlds.

[141]  Masanobu Yamamoto,et al.  Human motion analysis based on a robot arm model , 1991, Proceedings. 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition.

[142]  E G Paek,et al.  Simplified holographic associative memory using enhanced nonlinear processing with a thermoplastic plate. , 1991, Optics letters.

[143]  Andreas G. Andreou,et al.  Current-mode subthreshold MOS circuits for analog VLSI neural systems , 1991, IEEE Trans. Neural Networks.

[144]  Thomas K. Pilgram,et al.  Facial surface scanner , 1991, IEEE Computer Graphics and Applications.

[145]  Thomas S. Huang,et al.  An Integrated Approach to 3D Motion Analysis and Object Recognition , 1991, IEEE Trans. Pattern Anal. Mach. Intell..

[146]  Karen A. Frenkel,et al.  The human genome project and informatics , 1991, CACM.

[147]  P. Ekman,et al.  Contradictions in the study of contempt: What's it all about? Reply to Russell , 1991 .

[148]  Alex Pentland,et al.  Interactive-time vision: face recognition as a visual behavior , 1991 .

[149]  J. Russell Negative results on a reported facial expression of contempt , 1991 .

[150]  Leslie G. Ungerleider,et al.  Dissociation of object and spatial visual processing pathways in human extrastriate cortex. , 1991, Proceedings of the National Academy of Sciences of the United States of America.

[151]  H. Harashima,et al.  Analysis and synthesis of facial expressions in knowledge-based coding of facial image sequences , 1991, [Proceedings] ICASSP 91: 1991 International Conference on Acoustics, Speech, and Signal Processing.

[152]  M. Turk,et al.  Eigenfaces for Recognition , 1991, Journal of Cognitive Neuroscience.

[153]  A. J. Fridlund Evolution and facial action in reflex, social motive, and paralanguage , 1991, Biological Psychology.

[154]  Mark H. Johnson,et al.  Biology and Cognitive Development: The Case of Face Recognition , 1993 .

[155]  Kenji Mase,et al.  Recognition of Facial Expression from Optical Flow , 1991 .

[156]  R. Desimone Face-Selective Cells in the Temporal Cortex of Monkeys , 1991, Journal of Cognitive Neuroscience.

[157]  Kiyoharu Aizawa,et al.  Human facial motion modeling, analysis, and synthesis for video compression , 1991, Other Conferences.

[158]  J. Russell Rejoinder to Ekman, O'Sullivan, and Matsumoto , 1991 .

[159]  D Terzopoulos,et al.  The computer synthesis of expressive faces. , 1992, Philosophical transactions of the Royal Society of London. Series B, Biological sciences.

[160]  Joachim M. Buhmann,et al.  Object recognition with Gabor functions in the dynamic link architecture , 1992 .

[161]  Steven D. Pieper,et al.  CAPS: computer-aided plastic surgery , 1992 .

[162]  Paul Ekman,et al.  Facial Expressions of Emotion: New Findings, New Questions , 1992 .

[163]  A. Cowey,et al.  The role of the 'face-cell' area in the discrimination and recognition of faces by monkeys. , 1992, Philosophical transactions of the Royal Society of London. Series B, Biological sciences.

[164]  C. Pelachaud Communication and coarticulation in facial animation , 1992 .

[165]  Gregory J. Wolff,et al.  Neural network lipreading system for improved speech recognition , 1992, [Proceedings 1992] IJCNN International Joint Conference on Neural Networks.

[166]  Junji Yamato,et al.  Recognizing human action in time-sequential images using hidden Markov model , 1992, Proceedings 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition.

[167]  H. Oster,et al.  Adult Judgments and Fine-Grained Analysis of Infant Facial Expressions: Testing the Validity of A Priori Coding Formulas. , 1992 .

[168]  Michael I. Jordan,et al.  Forward Models: Supervised Learning with a Distal Teacher , 1992, Cogn. Sci..

[169]  Yasuhiko Watanabe,et al.  A trigonal prism-based method for hair image generation , 1992, IEEE Computer Graphics and Applications.

[170]  M. Young,et al.  Sparse population coding of faces in the inferotemporal cortex. , 1992, Science.

[171]  Peter Stucki,et al.  Database Requirements for Multimedia Applications , 1992 .

[172]  J. Sergent,et al.  Functional neuroanatomy of face and object processing. A positron emission tomography study. , 1992, Brain : a journal of neurology.

[173]  P. Ekman An argument for basic emotions , 1992 .

[174]  F. Fogelman Soulie,et al.  Multiresolution scene segmentation using MLPs , 1992, [Proceedings 1992] IJCNN International Joint Conference on Neural Networks.

[175]  M. Alibali,et al.  Transitions in concept acquisition: using the hand to read the mind. , 1993, Psychological review.

[176]  Tobias Delbrück,et al.  Investigations of analog VLSI visual transduction and motion processing , 1993 .

[177]  D Psaltis,et al.  Optical network for real-time face recognition. , 1993, Applied optics.

[178]  P. Ekman,et al.  Voluntary Smiling Changes Regional Brain Activity , 1993 .

[179]  Alice J. O'Toole,et al.  Low-dimensional representation of faces in higher dimensions of the face space , 1993 .