BIROn - Birkbeck Institutional Research Online

    What you see depends on what you hear: temporal averaging and crossmodal integration

    Chen, L. and Zhou, X. and Muller, H. and Shi, Z. (2018) What you see depends on what you hear: temporal averaging and crossmodal integration. Journal of Experimental Psychology: General 147 (12), pp. 1851-1864. ISSN 0096-3445.


    Abstract

    In our multisensory world, we often rely more on auditory information than on visual input for temporal processing. One typical demonstration of this is that the perceived rate of visual flicker assimilates to the rate of concurrent auditory flutter. To date, however, this auditory dominance effect has largely been studied using regular auditory rhythms. It thus remains unclear whether irregular rhythms would have a similar impact on visual temporal processing, what information is extracted from the auditory sequence that comes to influence visual timing, and how the auditory and visual temporal rates are integrated in quantitative terms. We investigated these questions by assessing, and modeling, the influence of a task-irrelevant auditory sequence on the type of "Ternus apparent motion" perceived: group motion versus element motion. The type of motion seen critically depends on the time interval between the two Ternus display frames. We found that an irrelevant auditory sequence preceding the Ternus display modulates the perceived visual interval, making observers report either more group motion or more element motion. This biasing effect manifests whether the auditory sequence is regular or irregular, and it is based on a summary statistic extracted from the sequential intervals: their geometric mean. However, the audiovisual interaction depends on the discrepancy between the mean auditory and visual intervals: if it becomes too large, no interaction occurs. This pattern can be quantitatively described by a partial Bayesian integration model. Overall, our findings reveal a crossmodal perceptual averaging principle that may underlie complex audiovisual interactions in many everyday dynamic situations.
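    The two computational ingredients named in the abstract, a geometric-mean summary of the auditory intervals and a discrepancy-gated, reliability-weighted fusion with the visual interval, can be sketched as follows. This is an illustrative simplification, not the authors' implementation: the function names, the variance parameters, and the hard discrepancy cutoff are assumptions (the paper fits a full partial Bayesian integration model rather than an all-or-none gate).

    ```python
    import math

    def geometric_mean(intervals):
        """Summary statistic of a sequence of inter-stimulus intervals (ms):
        the geometric mean, exp of the mean log interval."""
        return math.exp(sum(math.log(i) for i in intervals) / len(intervals))

    def integrate_intervals(auditory_mean, visual_interval,
                            var_a, var_v, discrepancy_limit):
        """Reliability-weighted audiovisual fusion of interval estimates.
        If the audiovisual discrepancy is too large, integration breaks down
        and the visual estimate is returned unchanged (a crude stand-in for
        the paper's graded partial-integration scheme)."""
        if abs(auditory_mean - visual_interval) > discrepancy_limit:
            return visual_interval
        # Weights are inverse variances (precisions), normalized to sum to 1.
        w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
        return w_a * auditory_mean + (1 - w_a) * visual_interval

    # Example: a slightly irregular auditory sequence pulls the perceived
    # visual interval toward its geometric mean when discrepancy is small.
    a_mean = geometric_mean([90, 110, 100, 105])
    biased = integrate_intervals(a_mean, 140, var_a=1.0, var_v=1.0,
                                 discrepancy_limit=80)
    ```

    With equal variances the fused estimate is simply the midpoint of the two interval estimates; with a more reliable (lower-variance) auditory channel it shifts further toward the auditory mean, which is one way to express the auditory dominance the abstract describes.
    
    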

    Metadata

    Item Type: Article
    Additional Information: This is the version of the article first submitted to the journal. It has not been peer reviewed.
    School: Birkbeck Schools and Departments > School of Science > Psychological Sciences
    Depositing User: Hermann Muller
    Date Deposited: 21 May 2019 12:41
    Last Modified: 01 Aug 2019 23:03
    URI: http://eprints.bbk.ac.uk/id/eprint/27059

