Here today, gone tomorrow - adaptation to change in memory-guided visual search
Visual search for a target object can be facilitated by the repeated presentation of an invariant configuration of nontargets ('contextual cueing'). Here, we tested the adaptation of learned contextual associations after a sudden, but permanent, relocation of the target. After an initial learning phase, targets were relocated within their invariant contexts and repeatedly presented at new locations before returning to their initial locations. Contextual cueing for relocated targets was observed neither after numerous presentations nor after the insertion of an overnight break. Further experiments investigated whether learning additional, previously unseen context-target configurations is comparable to adapting existing contextual associations to change. In contrast to the lack of adaptation to changed target locations, contextual cueing developed for additional invariant configurations under identical training conditions. Moreover, across all experiments, presenting relocated targets or additional contexts did not interfere with contextual cueing of the initially learned invariant configurations. Overall, the adaptation of contextual memory to changed target locations was severely constrained and unsuccessful compared with the learning of an additional set of contexts, suggesting that contextual cueing facilitates search for only one repeated target location.
The role of unique color changes and singletons in attention capture
Previous studies have shown that a sudden color change is typically less salient in capturing attention than the onset of a new object. Von Mühlenen, Rempel, and Enns (Psychological Science 16: 979-986, 2005) showed that a color change can capture attention as effectively as the onset of a new object, provided that it occurs during a period of temporal calm in which no other display changes happen. The current study presents a series of experiments that further investigate the conditions under which a change in color captures attention, by disentangling the change signal from the onset of a singleton. The results show that the item changing color receives attentional priority irrespective of whether this change is accompanied by the appearance of a singleton.
Object integration requires attention: visual search for Kanizsa figures in parietal extinction
The contribution of selective attention to object integration is a topic of debate: integration of parts into coherent wholes, such as in Kanizsa figures, is thought to arise either from pre-attentive, automatic coding processes or from higher-order processes involving selective attention. Previous studies have attempted to examine the role of selective attention in object integration either by employing visual search paradigms or by studying patients with unilateral deficits of selective attention. Here, we combined these two approaches to investigate object integration in visual search in a group of five patients with left-sided parietal extinction. Our search paradigm was designed to assess the effect of left- and right-grouped nontargets on detecting a Kanizsa target square. The results revealed comparable reaction time (RT) performance in patients and controls when they were presented with displays consisting of a single to-be-grouped item that had to be classified as target vs. nontarget. However, when display size increased to two items, patients showed an extinction-specific pattern of enhanced RT costs for nontargets that induced a partial shape grouping on the right, i.e., in the attended hemifield (relative to the ungrouped baseline). Together, these findings demonstrate a competitive advantage for right-grouped objects, which in turn indicates that in parietal extinction, attentional competition between objects particularly limits integration processes in the contralesional, i.e., left, hemifield. These findings imply a crucial contribution of selective attentional resources to visual object integration.
Figural Completion in Visual Search
The integration of part fragments into a coherent, unified object constitutes an essential aspect of visual information processing. This cumulative dissertation investigates the question of object integration by analyzing figure-formation mechanisms and their role in visual search. In a series of experiments with illusory figure configurations ('Kanizsa' figures), it was shown that search efficiency depends on the integrated shape representation of individual object fragments. The guidance of search processes can accordingly be understood in terms of an extraction mechanism that groups, or segments, the available visual information into salient regions and draws on shape-sensitive processing in occipito-parietal areas. Complementary to these mechanisms of figural segmentation, a further series of experiments showed that task-relevant characteristics can also influence grouping-specific search processes. Figure-formation processes in visual search can thus be modified, and explained, both by the computation of a salient region and by the respective demands of the task.
Changes in attentional breadth scale with the demands of Kanizsa-figure object completion–evidence from pupillometry
The effect of task-irrelevant objects in spatial contextual cueing
During visual search, the spatial configuration of the stimuli can be learned when the same displays are presented repeatedly, thereby guiding attention more efficiently to the target location (contextual cueing effect). This study investigated how the presence of a task-irrelevant object influences the contextual cueing effect. Experiment 1 used a standard T/L search task with "old" display configurations presented repeatedly among "new" displays. A green-filled square appeared at unoccupied locations within the search display. The results showed that the typical contextual cueing effect was strongly reduced when a square was added to the display. In Experiment 2, the contextual cueing effect was reinstated by simply including trials in which the square could appear at an occupied location (i.e., underneath the search stimuli). Experiment 3 replicated the previous experiment, showing that the restored contextual cueing effect did not depend on whether the square actually overlapped with a stimulus. The final two experiments introduced a display change in the last epoch. The results showed that the square hinders not only the acquisition of contextual information but also its manifestation. These findings are discussed in terms of an account in which effective contextual learning depends on whether the square is perceived as part of the search display or as part of the display background.
Event-related potentials reveal increased distraction by salient global objects in older adults
Mission impossible? Spatial context relearning following a target relocation event depends on cue predictiveness
When experience with scenes foils attentional orienting: ERP evidence against flexible target-context mapping in visual search
Visual search is speeded when a target is repeatedly presented in an invariant scene context of nontargets (contextual cueing), demonstrating observers' capability for using statistical long-term memory (LTM) to make predictions about upcoming sensory events, thus improving attentional orienting. In the current study, we investigated whether expectations arising from individual, learned environmental structures can encompass multiple target locations. We recorded event-related potentials (ERPs) while participants performed a contextual cueing search task with repeated and non-repeated spatial item configurations. Notably, a given search display could be associated with either a single target location (standard contextual cueing) or two possible target locations. Our results showed that LTM-guided attention was always limited to only one target position, in single- as well as dual-target displays, as evidenced by expedited reaction times (RTs) and enhanced N1pc and N2pc deflections contralateral to one ("dominant") target of up to two repeating target locations. This contrasts with the processing of non-learned ("minor") target positions (in dual-target displays), which revealed slowed RTs alongside an initial N1pc "misguidance" signal that then vanished in the subsequent N2pc. This RT slowing was accompanied by enhanced N200 and N400 waveforms over fronto-central electrodes, suggesting that control mechanisms regulate the competition between dominant and minor targets. Our study thus reveals a dissociation in processing dominant versus minor targets: while LTM templates guide attention to dominant targets, minor targets necessitate control processes to overcome the automatic bias towards previously learned, dominant target locations.
The contrasting impact of global and local object attributes on Kanizsa figure detection
