BIROn - Birkbeck Institutional Research Online

    Action prediction across match-on-action cuts in infancy

    Ildirar Kirbas, Sermin and Smith, Tim J. (2017) Action prediction across match-on-action cuts in infancy. In: SCSMI 2017 Helsinki, 11-14 Jun 2017, Helsinki, Finland.

    23211a.pdf - Published Version of Record



    Match-on-action refers to an editing technique in which a subject begins an action in one shot and carries it through to completion in the next (Anderson, 1996; Bordwell & Thompson, 2001). The action bridge between shots distracts the viewer from noticing the cut (i.e. edit blindness; Smith & Henderson, 2008; Smith & Martin-Portugues Santacreu, 2016) and provides a foundation for the perception of continuity. It also enables even first-time adult viewers to perceive spatiotemporal continuity between shots, which they cannot do in the absence of action continuing through the cut (Schwan & Ildirar, 2010; Ildirar & Schwan, 2015). This technique is believed to function both by cuing attentional shifts pre-cut and by using motion blur post-cut (Pepperman, 2001) to limit viewers' attention to, and perceptual discrimination of, the cut (Smith, 2012; Smith & Martin-Portugues Santacreu, 2016). Adults (Flanagan & Johnson, 2003), as well as 12-month-old infants (but not 6-month-olds; Falck-Ytter, Gredebaek & Hofsten, 2006), perform goal-directed, anticipatory eye movements when observing actions performed by others. This anticipatory gaze ability is mediated by a mirror neuron system (MNS; Buccino, 2001; Kohler, 2002), and the MNS is only activated when an agent is seen performing actions, not when objects move on their own, in both adults (Flanagan & Johnson, 2003) and infants (Falck-Ytter et al., 2008). Segmenting events into units is critical for anticipating future actions, for both adults and infants. One hypothesis for how event perception develops is that infants begin with basic, domain-general learning mechanisms that allow them to group actions based on the sequential predictability of the actions they observe (Baldwin & Baird, 2001; Baldwin, Baird, Saylor, & Clark, 2001; Roseberry, Richie, Hirsh-Pasek, Golinkoff, & Shipley, 2011).
Infants could use these initial groupings to discover more abstract cues to event structure, such as the actor's intentions, which are known to play a role in adults' global event segmentation (e.g., Wilder, 1978; Zacks, 2004; Zacks & Tversky, 2001). Visual sequence learning is a primary mechanism for event segmentation, and research shows that eight-month-old infants are sensitive to the sequential statistics of actions performed by a human agent (Roseberry et al., 2011). More interestingly, adults (Baldwin, Andersson, Saffran, & Meyer, 2008) as well as infants in their first year of life (Stahl, Romberg, Roseberry, Golinkoff & Hirsh-Pasek, 2014) can segment a continuous action sequence based on sequential predictability alone, which suggests that before infants have top-down knowledge of intentions, they may begin to segment events based on sequential predictability. The study we will present at the conference aims to find out what happens when the observed action is distributed across film cuts. To this end we produced two sets of film clips depicting four conditions. In the Single Shot Agent (SSA) condition, an adult sitting in front of a table moved three objects from one side of the table to the other, shown in one long single shot. In the Multiple Shot Agent (MSA) condition, the action is segmented into taking and putting subactions across multiple close-up shots. In both the Single Shot No Agent (SSNA) and Multiple Shot No Agent (MSNA) conditions, the objects move by themselves, one at a time. All film clips end with a long single shot paused when the last object (in the agent's hand or alone) is in the middle of its trajectory, in order to measure anticipatory saccades by the viewer towards the end point of the object's trajectory. We defined three areas of interest (AOI): one covering the starting position of the objects (Object AOI), one covering the finishing position of the objects (Goal AOI), and one covering the trajectory of the objects (Trajectory AOI).
The eye movements of adults (N = 20) and 12-month-old infants (N = 20) will be measured during each video. The timing of subjects' saccades from the Object AOI to the Goal AOI will be compared to the arrival of the object. If gaze arrives at the Goal AOI before the object, the trial will be considered predictive; if gaze arrives at the Goal AOI after the object, the trial will be considered reactive. Based on prior evidence (Falck-Ytter, Gredebaek & Hofsten, 2006), we predict that 12-month-old infants and adults will be able to anticipate the end point of a moving object shown in a single long shot and moved by a human agent, but not when the object moves on its own. By comparison, we predict that only adults will anticipate the end point of an object moved by an agent when it is initially shown via an edited sequence of close-ups. Of critical interest is whether the anticipatory behaviour of infants for the edited sequence is associated with each infant's daily exposure to moving images (e.g. TV, film, or touchscreen devices), and whether such anticipatory behaviour might serve as a future index of visual literacy in pre-verbal children. The results of this study will be discussed with regard to the mirror neuron theory of action (Rizzolatti, Fogassi & Gallese, 2001) and event segmentation theory (Zacks et al., 2007). This work represents the first steps toward understanding how visual literacy emerges in infancy and its parallels with typical cognitive development.
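    The predictive/reactive classification rule described above can be sketched in code. This is a hypothetical illustration, not the authors' analysis pipeline: the function names, the millisecond timestamps, and the "no-look" handling for trials where gaze never enters the Goal AOI are all assumptions made for the example (the abstract does not specify how ties or missed looks are treated).

    ```python
    # Hypothetical sketch of the trial classification described in the abstract:
    # a trial is "predictive" if gaze reaches the Goal AOI before the moving
    # object does, and "reactive" if it arrives after. All names and numbers
    # below are illustrative assumptions, not the study's actual pipeline.

    def classify_trial(gaze_arrival_ms, object_arrival_ms):
        """Classify one trial by comparing arrival times at the Goal AOI."""
        if gaze_arrival_ms is None:
            return "no-look"      # gaze never entered the Goal AOI (assumed handling)
        if gaze_arrival_ms < object_arrival_ms:
            return "predictive"   # anticipatory saccade: gaze beat the object
        return "reactive"         # gaze followed the object into the AOI

    def proportion_predictive(trials):
        """Proportion of predictive trials among those with a valid look."""
        looked = [(g, o) for g, o in trials if g is not None]
        if not looked:
            return 0.0
        predictive = sum(1 for g, o in looked if g < o)
        return predictive / len(looked)

    # Example trials: (gaze arrival, object arrival) in ms from motion onset
    trials = [(850, 1200), (1300, 1200), (None, 1200), (900, 1200)]
    labels = [classify_trial(g, o) for g, o in trials]
    print(labels)                          # per-trial classifications
    print(proportion_predictive(trials))   # per-subject summary score
    ```

    A per-subject proportion like this could then be compared across the four conditions (SSA, MSA, SSNA, MSNA) and correlated with measures of daily screen-media exposure.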


    Item Type: Conference or Workshop Item (Speech)
    School: School of Social Sciences, History and Philosophy > Psychosocial Studies
    Research Centres and Institutes: Brain and Cognitive Development, Centre for (CBCD)
    Depositing User: Sermin Ildirar kirbas
    Date Deposited: 28 Jan 2019 13:20
    Last Modified: 17 Jun 2021 14:38

