To dash or to dawdle: verb-associated speed of motion influences eye movements during spoken sentence comprehension
In describing motion events, verbs of manner provide information about the speed of agents or objects in those events. We used eye tracking to investigate how inferences about this verb-associated speed of motion influence the time course of attention to a visual scene that matches an event described in language. Eye movements were recorded as participants heard spoken sentences with verbs that implied fast (“dash”) or slow (“dawdle”) movement of an agent towards a goal. These sentences were heard whilst participants concurrently looked at scenes depicting the agent and a path leading to the goal object. Our results indicate a mapping of events onto the visual scene consistent with participants mentally simulating the movement of the agent along the path towards the goal: when the verb implies a slow manner of motion, participants look more often and for longer along the path to the goal; when the verb implies a fast manner of motion, they tend to look at the goal earlier and at the path less. These results reveal that event comprehension in the presence of a visual world involves establishing and dynamically updating the locations of entities in response to linguistic descriptions of events.
Why Um Helps Auditory Word Recognition: The Temporal Delay Hypothesis
Several studies suggest that speech understanding can sometimes benefit from the presence of filled pauses (uh, um, and the like), and that words following such filled pauses are recognised more quickly. Three experiments examined whether this is because filled pauses serve to delay the onset of upcoming words and these delays facilitate auditory word recognition, or whether the fillers themselves signal upcoming delays in a way that informs listeners' reactions. Participants viewed pairs of images on a computer screen and followed recorded instructions to press buttons corresponding to either an easy (unmanipulated, with a high-frequency name) or a difficult (visually blurred, with a low-frequency name) image. In all three experiments, participants were faster to respond to easy images. In 50% of trials in each experiment, the name of the image was directly preceded by a delay; in the remaining trials, an equivalent delay was included earlier in the instruction. Participants were quicker to respond when a name was directly preceded by a delay, regardless of whether this delay was filled with a spoken um, was silent, or contained an artificial tone. This effect did not interact with the effect of image difficulty, nor did it change over the course of each experiment. Taken together, our consistent finding that delays of any kind help word recognition indicates that natural delays such as fillers need not be treated as ‘signals’ to explain the benefits they bring to listeners' ability to recognise and respond to the words that follow them.
Overt is no better than covert when rehearsing visuo-spatial information in working memory
In the present study, we examined whether eye movements facilitate the retention of visuo-spatial information in working memory. In two experiments, participants memorised the sequence of spatial locations of six digits across a retention interval. In some conditions, participants were free to move their eyes during the retention interval; in others, they were either required to remain fixated or instructed to move their eyes exclusively to a selection of the memorised locations. Memory performance was no better when participants were free to move their eyes during the memory interval than when they fixated a single location. Furthermore, the results demonstrated a primacy effect in the eye-movement behaviour that corresponded with the memory performance. We conclude that overt eye movements do not provide a benefit over covert attention for rehearsing visuo-spatial information in working memory.
Cue predictability changes scaling in eye-movement fluctuations
Recent research has provided evidence for scaling relations in eye-movement fluctuations, but not much is known about what these scaling relations imply about cognition or eye-movement control. Generally, scaling relations in behavioral and neurophysiological data have been interpreted as an indicator of the coordination of neurophysiological and cognitive processes. In this study, we investigated the effect of predictability in timing and gaze direction on eye-movement fluctuations. Participants performed a simple eye-movement task in which a visual cue prompted their gaze to different locations on a spatial layout, and the predictability of the temporal and directional aspects of the cue was manipulated. The results showed that scaling exponents in eye movements decreased with predictability and were related to participants' perceived effort during the task. In relation to past research, these findings suggest that scaling exponents reflect a relative demand for voluntary control during task performance.
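The abstract does not specify how the scaling exponents were estimated. As a hedged illustration only, the sketch below uses detrended fluctuation analysis (DFA), one standard way to quantify scaling in a fluctuation series; the function name, window sizes, and the white-noise check are assumptions for demonstration, not the authors' actual pipeline.

```python
import numpy as np

def dfa_exponent(signal, window_sizes):
    """Estimate a scaling exponent via detrended fluctuation analysis (DFA).

    Illustrative sketch only: the study's own estimation method is not
    described in the abstract.
    """
    # Integrate the mean-centred signal to obtain the "profile".
    profile = np.cumsum(signal - np.mean(signal))
    fluctuations = []
    for n in window_sizes:
        # Split the profile into non-overlapping windows of length n.
        n_windows = len(profile) // n
        segments = profile[:n_windows * n].reshape(n_windows, n)
        # Detrend each window with a linear fit and take the RMS residual.
        x = np.arange(n)
        rms = []
        for seg in segments:
            trend = np.polyval(np.polyfit(x, seg, 1), x)
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    # The scaling exponent is the slope of log F(n) against log n.
    slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return slope

# Sanity check: white noise should give an exponent near 0.5.
rng = np.random.default_rng(0)
alpha = dfa_exponent(rng.standard_normal(4096), window_sizes=[16, 32, 64, 128, 256])
```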
Facilitating Understanding of Movements in Dynamic Visualizations: an Embodied Perspective
Models of high-dimensional semantic space predict language-mediated eye movements in the visual world
In the visual world paradigm, participants are more likely to fixate a visual referent that has some semantic relationship with a heard word than they are to fixate an unrelated referent [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6, 813–839]. Here, this method is used to examine the psychological validity of models of high-dimensional semantic space. The data strongly suggest that these corpus-based measures of word semantics predict fixation behavior in the visual world, and they provide further evidence that language-mediated eye movements to objects in the concurrent visual environment are driven by semantic similarity rather than all-or-none categorical knowledge. The data suggest that the visual world paradigm can, together with other methodologies, converge on evidence that may help adjudicate between different theoretical accounts of psychological semantics.
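As a hedged illustration of how a high-dimensional semantic-space measure can generate fixation predictions, the sketch below ranks depicted objects by cosine similarity to a spoken word. The object names and three-dimensional vectors are invented for demonstration; real corpus-based models derive hundreds of dimensions from co-occurrence statistics.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors; higher means more similar."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy vectors standing in for corpus-derived semantic vectors.
vectors = {
    "piano":   np.array([0.9, 0.1, 0.3]),
    "trumpet": np.array([0.8, 0.2, 0.4]),
    "carrot":  np.array([0.1, 0.9, 0.2]),
}

spoken_word = "piano"
# Rank the depicted objects by semantic similarity to the spoken word:
# the prediction is that more similar referents attract more fixations.
ranked = sorted(
    (obj for obj in vectors if obj != spoken_word),
    key=lambda obj: cosine_similarity(vectors[spoken_word], vectors[obj]),
    reverse=True,
)
```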
