BIROn - Birkbeck Institutional Research Online

    Transmitting and decoding facial expressions

    Smith, Marie L. and Gosselin, F. and Cottrell, G.W. and Schyns, P.G. (2005) Transmitting and decoding facial expressions. Psychological Science 16 (3), pp. 184-189. ISSN 0956-7976.

    Full text not available from this repository.

    Abstract

This article examines the human face as a transmitter of expression signals and the brain as a decoder of these expression signals. If the face has evolved to optimize transmission of such signals, the basic facial expressions should have minimal overlap in their information. If the brain has evolved to optimize categorization of expressions, it should be efficient with the information available from the transmitter for the task. In this article, we characterize the information underlying the recognition of the six basic facial expression signals and evaluate how efficiently each expression is decoded by the underlying brain structures.

    The ability to accurately interpret facial expressions is of primary importance for humans to socially interact with one another (Nachson, 1995). Facial expressions communicate information from which one can quickly infer the state of mind of one's peers, and adjust one's behavior accordingly. Facial expressions are typically arranged into six universally recognized basic categories (fear, happiness, sadness, disgust, anger, and surprise; Ekman & Friesen, 1975; Izard, 1971) that are similar across different backgrounds and cultures (Ekman & Friesen, 1975; Izard, 1971, 1994).

    In this article, we examine the basic facial expressions computationally, as signals in a communication channel between an encoding face (the transmitter of expression signals) and a decoding brain (the categorizer of expression signals). We address three main issues: How is facial information encoded to transmit expression signals? How is information decoded to categorize facial expressions? How efficient is the decoding process? From the standpoint of signal encoding, different facial expressions should have minimal overlap in their information: Ideal signals are encoded orthogonally to one another.
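The orthogonality criterion can be made concrete with a toy computation (not the paper's actual data): represent each expression's "information map" as a vector indicating which facial regions carry its signal, and measure overlap as the normalized dot product. The region assignments below are purely illustrative.

```python
import numpy as np

# Toy information maps for two expressions over 64 hypothetical facial
# locations: each map marks where that expression's signal resides.
rng = np.random.default_rng(1)
map_happy = np.zeros(64)
map_happy[:16] = rng.random(16)    # e.g. signal concentrated in one region
map_fear = np.zeros(64)
map_fear[32:48] = rng.random(16)   # signal in a disjoint region

# Normalized dot product: 0 means orthogonal (no shared information),
# 1 means the two expressions rely on identical facial information.
overlap = np.dot(map_happy, map_fear) / (
    np.linalg.norm(map_happy) * np.linalg.norm(map_fear))
```

Here `overlap` is exactly 0 because the two maps occupy disjoint regions; real expression signals would yield intermediate values, and a transmitter optimized for discriminability should keep those values low.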
To understand how the brain encodes facial signals, we relied on a model to benchmark the information transmitted in each of the six basic expressions (plus neutral), and also to quantify how these signals overlap. As a decoder, the brain initially analyzes expression signals impinging on the retina using a number of quasilinear band-pass filters, each preferentially tuned to a spatial frequency band (De Valois & De Valois, 1991). Spatial scales are therefore good candidates as building blocks for understanding the decoding of facial expression information. We applied Bubbles (Gosselin & Schyns, 2001) to estimate how the brain uses spatial-scale information to decode and classify the six basic facial expressions (plus neutral), and also to quantify how the information in these expressions overlaps. From the estimates of transmitted and decoded facial information, we measured the brain's efficiency in decoding the facial expression information that is transmitted.
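The stimulus-sampling step of a Bubbles-style experiment can be sketched as follows. This is a minimal single-trial illustration, not the authors' implementation: a random array stands in for a face image, and the bubble counts and aperture sizes per scale are arbitrary placeholders, chosen only to show that coarser spatial scales get fewer but larger apertures.

```python
import numpy as np

def bubble_mask(shape, n_bubbles, sigma, rng):
    """Sum of Gaussian apertures at random locations, clipped to [0, 1]."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

rng = np.random.default_rng(0)
face = rng.random((128, 128))        # stand-in for a grayscale face image

# One aperture size per spatial scale: coarser scales get fewer, larger
# bubbles (counts and sigmas here are illustrative, not the paper's values).
counts = [40, 20, 10, 5]
sigmas = [2, 4, 8, 16]
mask = np.clip(sum(bubble_mask(face.shape, n, s, rng)
                   for n, s in zip(counts, sigmas)), 0.0, 1.0)

# Reveal the face only through the apertures; mid-gray (0.5) elsewhere.
stimulus = mask * face + (1 - mask) * 0.5
```

Across many such trials, correlating where the random apertures fell with the observer's classification accuracy estimates which locations, at which spatial scales, drive recognition of each expression.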

    Metadata

    Item Type: Article
    School: School of Science > Psychological Sciences
    Depositing User: Sarah Hall
    Date Deposited: 10 Mar 2020 17:00
    Last Modified: 10 Mar 2020 17:00
    URI: https://eprints.bbk.ac.uk/id/eprint/31265
