
    Reading during the composition of multi-sentence texts: an eye-movement study

    Writers composing multi-sentence texts have immediate access to a visual representation of what they have written, yet little is known about the detail of writers' eye movements within this text during production. We describe two experiments in which competent adult writers' eye movements were tracked while they performed short expository writing tasks. These are contrasted with conditions in which participants read and evaluated researcher-provided texts. Writers spent a mean of around 13% of their time looking back into their text. Initiation of these look-back sequences was strongly predicted by linguistically important boundaries in their ongoing production (e.g., writers were much more likely to look back immediately prior to starting a new sentence). Of these look-back sequences, 36% were associated with sustained reading and the remainder with less patterned forward and backward saccades between words ("hopping"). Fixation and gaze durations and the presence of word-length effects suggested lexical processing of fixated words in both reading and hopping sequences. Word frequency effects were not present when writers read their own text. Findings demonstrate the technical possibility and potential value of examining writers' fixations within their just-written text. We suggest that these fixations do not serve solely, or even primarily, in monitoring for error, but play an important role in planning ongoing production.

    Effects of syntactic context on eye movements during reading

    Previous research has demonstrated that properties of the currently fixated word and of adjacent words influence eye movement control in reading. In contrast to such local effects, little is known about global effects on eye movement control, for example global adjustments caused by processing difficulty in previous sentences. In the present study, participants read text passages in which voice (active vs. passive) and sentence structure (embedded vs. non-embedded) were manipulated. These passages were followed by identical target sentences. The results revealed effects of previous sentence structure on gaze durations in the target sentence, implying that syntactic properties of previously read sentences may lead to a global adjustment of eye movement control.

    The role of left and right hemispheres in the comprehension of idiomatic language: an electrical neuroimaging study

    Background: The specific role of the two cerebral hemispheres in processing idiomatic language is highly debated. While some studies show the involvement of the left inferior frontal gyrus (LIFG), other data support the crucial role of right-hemispheric regions, particularly the middle/superior temporal area. The time course and neural bases of literal vs. idiomatic language processing were compared. Fifteen volunteers silently read 360 idiomatic and literal Italian sentences and decided whether they were semantically related or unrelated to a following target word, while their EEGs were recorded from 128 electrodes. Word length, abstractness, frequency of use, sentence comprehensibility, familiarity and cloze probability were matched across classes.
    Results: Participants responded more quickly to literal than to idiomatic sentences, probably indicating a difference in task difficulty. The occipito-temporal N2 component had a greater amplitude in response to idioms between 250-300 ms. Related swLORETA source reconstruction revealed a difference in the activation of the left fusiform gyrus (FG, BA19) and medial frontal gyri for the contrast idiomatic-minus-literal. The centro-parietal N400 was much larger to idiomatic than to literal phrases (360-550 ms). The intra-cortical generators of this effect included the left and right FG, the left cingulate gyrus, the right limbic area, the right MTG (BA21) and the left middle frontal gyrus (BA46). Finally, an anterior late positivity (600-800 ms) was larger to idiomatic than to literal phrases. ERPs also showed a larger right centro-parietal N400 to associated than to non-associated targets (not differing as a function of sentence type), and a greater right frontal P600 to idiomatic than to literal associated targets.
    Conclusion: The data indicate bilateral involvement of both hemispheres in idiom comprehension, including the right MTG after 350 ms and the right medial frontal gyrus in the 270-300 and 500-780 ms time windows. In addition, the activation of left and right limbic regions (400-450 ms) suggests that they have a role in the emotional connotation of colourful idiomatic language. The data support the view that there is direct access to the idiomatic meaning of figurative language, not dependent on the suppression of its literal meaning, for which the LIFG was previously thought to be responsible.

    Integrating Mechanisms of Visual Guidance in Naturalistic Language Production

    Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study demonstrating that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual guidance (scene clutter) and conceptual guidance (cue animacy), and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan-pattern complexity. Furthermore, the eye-voice spans of the cued object and its perceptual competitor are similar, with latency mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.
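    The "entropy of attentional landscapes" measure used above can be illustrated with a minimal sketch: treat the scene as a grid of regions, count fixations per region, normalize the counts into a probability distribution, and compute its Shannon entropy. High entropy means attention is spread broadly across the scene; low entropy means it is concentrated on a few regions. The function name and the toy fixation grids below are hypothetical, for illustration only, and do not reproduce the paper's analysis.

    ```python
    from math import log2

    def attentional_entropy(fixation_counts):
        """Shannon entropy (bits) of a normalized attentional landscape.

        fixation_counts: grid (list of rows) of fixation counts per scene region.
        """
        cells = [c for row in fixation_counts for c in row]
        total = sum(cells)
        # Normalize to probabilities; empty regions are skipped (0 * log 0 := 0).
        probs = [c / total for c in cells if c > 0]
        # 0.0 - sum(...) rather than -sum(...) avoids returning IEEE -0.0
        # when all attention falls on a single region.
        return 0.0 - sum(p * log2(p) for p in probs)

    # Uniform attention over 4 regions: maximal entropy, log2(4) = 2 bits.
    print(attentional_entropy([[1, 1], [1, 1]]))   # → 2.0

    # All fixations on one region: minimal entropy.
    print(attentional_entropy([[8, 0], [0, 0]]))   # → 0.0
    ```

    On this definition, any redistribution of fixations away from a single region strictly increases the entropy, which is why it serves as a scalar index of how dispersed visual attention is.
    
    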

    Syntactic Ambiguity Resolution While Reading in Second and Native Languages
