
    Multisensory Motion Perception in 3–4 Month-Old Infants

    Human infants begin very early in life to take advantage of multisensory information by extracting the invariant amodal information that is conveyed redundantly by multiple senses. Here we addressed the question of whether infants can bind multisensory moving stimuli, and whether this occurs even if the motion produced by the stimuli is only illusory. Three- to 4-month-old infants were presented with two bimodal pairings: visuo-tactile and audio-visual. Visuo-tactile pairings consisted of apparently vertically moving bars (the Barber Pole illusion) moving in either the same or opposite direction as a concurrent tactile stimulus consisting of strokes given on the infant's back. Audio-visual pairings consisted of the Barber Pole illusion in its visual and auditory versions, the latter giving the impression of a continuously rising or descending pitch. We found that infants were able to discriminate congruently moving (same direction) vs. incongruently moving (opposite direction) pairs irrespective of modality (Experiment 1). Importantly, we also found that congruently moving visuo-tactile and audio-visual stimuli were preferred over incongruently moving bimodal stimuli (Experiment 2). Our findings suggest that very young infants are able to extract motion as an amodal component and use it to match stimuli that only apparently move in the same direction.

    Body Perception: Intersensory Origins of Self and Other Perception in Newborns

    Self-perception involves integrating changes in visual, tactile, and proprioceptive stimulation from self-motion and discriminating these changes from those of other objects. Recent evidence suggests that even newborns discriminate synchronous from asynchronous visual-tactile stimulation to their own body, a foundation for self-perception.

    Incidental learning in a multisensory environment across childhood

    Multisensory information has been shown to modulate attention in infants and to facilitate learning in adults by enhancing the amodal properties of a stimulus. However, it remains unclear whether this translates to learning in a multisensory environment across middle childhood, particularly in the case of incidental learning. One hundred and eighty-one children aged between 6 and 10 years participated in this study using a novel Multisensory Attention Learning Task (MALT). Participants were asked to respond to the presence of a target stimulus whilst ignoring distractors. Correct target selection resulted in the movement of the target exemplar to either the upper left or right screen quadrant, according to category membership. Category membership was defined by visual-only, auditory-only, or multisensory information. As early as 6 years of age, children demonstrated better performance on the incidental categorization task following exposure to multisensory audiovisual cues compared with unisensory information. These findings provide important insight into the use of multisensory information in learning, particularly incidental category learning. Implications for the deployment of multisensory learning tasks within education across development are discussed.

    Reduced orienting to audiovisual synchrony in infancy predicts autism diagnosis at 3 years of age

    Background: Effective multisensory processing develops in infancy and is thought to be important for the perception of unified, multimodal objects and events. Previous research suggests impaired multisensory processing in autism, but its role in the early development of the disorder remains uncertain. Here, using a prospective longitudinal design, we tested whether reduced visual attention to audiovisual synchrony is an infant marker of later-emerging autism diagnosis. Methods: We studied 10-month-old siblings of children with autism using an eye-tracking task previously used in studies of preschoolers. The task assessed the effect of manipulations of audiovisual synchrony on viewing patterns while the infants were observing point-light displays of biological motion. We analyzed the gaze data recorded in infancy according to diagnostic status at 3 years of age (DSM-5). Results: Ten-month-old infants who later received an autism diagnosis did not orient to audiovisual synchrony expressed within biological motion. In contrast, both low-risk infants and high-risk siblings without autism at follow-up showed a strong preference for this type of information. No group differences were observed in orienting to upright biological motion. Conclusions: This study suggests that reduced orienting to audiovisual synchrony within biological motion is an early sign of autism. The findings support the view that poor multisensory processing could be an important antecedent marker of this neurodevelopmental condition. Keywords: Autism spectrum disorder; infancy; multisensory processing; biological motion; biomarker; scientific replication.

    Face processing limitation to own species in primates: a comparative study in brown capuchins, Tonkean macaques and humans

    Most primates live in social groups whose survival and stability depend on individuals' abilities to create strong social relationships with other group members. The existence of such groups requires identifying individuals and assigning each of them a social status. Individual recognition can be achieved through vocalizations but also through faces. In humans, an efficient system exists for processing own-species faces. This specialization is achieved through experience with faces of conspecifics during development and leads to a loss of the ability to process faces of other primate species. We hypothesized that a similar mechanism exists in social primates. We investigated face processing in one Old World species (genus Macaca) and in one New World species (genus Cebus). Our results show the same advantage for own-species face recognition in all tested subjects. This work suggests the existence, in all species tested, of a common trait inherited from the primate ancestor: an efficient system for identifying individual faces of one's own species only.

    Designing Engaging Learning Experiences in Programming

    In this paper we describe work to investigate the creation of engaging programming learning experiences. Background research informed the design of four fieldwork studies to explore how programming tasks could be framed to motivate learners. Our empirical findings from these four field studies are summarized here, with a particular focus on one, Whack a Mole, which compared the use of a physical interface with a screen-based equivalent to obtain insights into what made for an engaging learning experience. Emotions reported by two sets of participant undergraduate students were analyzed, identifying the links between the emotions experienced during programming and their origins. Evidence was collected of the very positive emotions experienced by learners programming with a physical interface (Arduino) in comparison with a similar program developed using a screen-based equivalent interface. A follow-up study provided further evidence of the motivating effect of personalized design when programming tangible physical artefacts. Collating all the evidence led to the design of a set of ‘Learning Dimensions’ which may provide educators with insights to support key design decisions for the creation of engaging programming learning experiences.

    Just before I recognize myself: the role of featural and multisensory cues leading up to explicit mirror self-recognition

    Leading up to explicit mirror self-recognition, infants rely on two crucial sources of information: the continuous integration of sensorimotor and multisensory signals, as when seeing one's movements reflected in the mirror, and the unique facial features associated with the self. While visual appearance and multisensory contingent cues may be two likely candidates for the processes that enable self-recognition, their respective contributions remain poorly understood. In this study, 18-month-old infants saw side-by-side pictures of themselves and a peer, which were systematically and simultaneously touched on the face with a hand. While the infants watched the stimuli, their own face was touched either in synchrony or out of synchrony with the visual stimuli, and their preferential looking behavior was measured. Subsequently, the infants underwent the mirror-test task. We demonstrated that infants who were coded as non-recognizers at the mirror test spent significantly more time looking at the picture of their own face compared to the other face, irrespective of whether the multisensory input was synchronous or asynchronous. Our results suggest that right before the onset of mirror self-recognition, featural information about the self might be more relevant to the process of recognizing one's face than multisensory cues.

    Behavioural, emotional, and cognitive responses in European disasters: results of survivor interviews

    In the European multi-centre study BeSeCu (Behaviour, Security, Culture), interviews were conducted in seven countries to explore survivors' emotional, behavioural, and cognitive responses during disasters. Interviews, either in groups or one-to-one, were convened according to type of event: collapse of a building; earthquake; fire; flood; and terror attack. The content analysis of the interviews resulted in a theoretical framework describing the course of the events, behavioural responses, and the emotional and cognitive processing of survivors. While the environmental cues and the ability to recognise what was happening varied across disasters, survivors' responses tended to be more universal across events, and most often were adaptive and non-selfish. Several peri-traumatic factors related to current levels of post-traumatic stress were identified, while memory quantity did not differ as a function of event type or post-traumatic stress. Time since the event had only a minor effect on recall. Based on the findings, several suggestions for emergency training are made.

    The Importance of Retrieval Failures to Long-term Retention: A Metacognitive Explanation of the Spacing Effect

    Encoding strategies vary in their duration of effectiveness, and individuals can best identify and modify strategies that yield effects of short duration on the basis of retrieval failures. Multiple study sessions with long inter-session intervals are better than massed training at providing discriminative feedback that identifies encoding strategies of short duration. We report two investigations in which long intervals between study sessions yielded substantial benefits to long-term retention, at a cost of only moderately longer individual study sessions. When individuals monitor and control encoding over an extended period, targets yielding the largest number of retrieval failures contribute substantially to the spacing advantage. These findings are relevant to theory and to educators whose primary interest in memory pertains to the long-term maintenance of knowledge.

    Developmental changes in sensitivity to spatial and temporal properties of sensory integration underlying body representation

    The closer in time and space that two or more stimuli are presented, the more likely it is that they will be integrated together. A recent study by Hillock-Dunn and Wallace (2012) reported that the size of the visuo-auditory temporal binding window (the interval within which visual and auditory inputs are highly likely to be integrated) narrows over childhood. However, few studies have investigated how sensitivity to the temporal and spatial properties of multisensory integration underlying body representation develops in children. This is not only important for sensory processes but has also been argued to underpin social processes such as empathy and imitation (Schütz-Bosbach et al., 2006). We tested 4- to 11-year-olds' ability to detect a spatial discrepancy between visual and proprioceptive inputs (Experiment One) and a temporal discrepancy between visual and tactile inputs (Experiment Two) for hand representation. The likelihood that children integrated spatially separated visuo-proprioceptive information, and temporally asynchronous visuo-tactile information, decreased significantly with age. This suggests that the spatial and temporal rules governing the occurrence of multisensory integration underlying body representation are refined with age in typical development.