1,937 research outputs found

    Third-person self-talk facilitates emotion regulation without engaging cognitive control: Converging evidence from ERP and fMRI.

    Does silently talking to yourself in the third person constitute a relatively effortless form of self-control? We hypothesized that it does, under the premise that third-person self-talk leads people to think about the self similarly to how they think about others, which provides them with the psychological distance needed to facilitate self-control. We tested this prediction by asking participants to reflect on feelings elicited by viewing aversive images (Study 1) and recalling negative autobiographical memories (Study 2) using either "I" or their name while measuring neural activity via ERPs (Study 1) and fMRI (Study 2). Study 1 demonstrated that third-person self-talk reduced an ERP marker of self-referential emotional reactivity (i.e., the late positive potential) within the first second of viewing aversive images without enhancing an ERP marker of cognitive control (i.e., the stimulus-preceding negativity). Conceptually replicating these results, Study 2 demonstrated that third-person self-talk was linked with reduced activation in an a priori defined fMRI marker of self-referential processing (i.e., the medial prefrontal cortex) when participants reflected on negative memories, without eliciting increased activity in a priori defined fMRI markers of cognitive control. Together, these results suggest that third-person self-talk may constitute a relatively effortless form of self-control.

    The brightness clustering transform and locally contrasting keypoints

    In recent years a new wave of feature descriptors has been presented to the computer vision community: ORB, BRISK and FREAK, amongst others. These new descriptors reduce time and memory consumption in the processing and storage stages of tasks such as image matching or visual odometry, enabling real-time applications. The problem is now the lack of fast interest-point detectors with good repeatability to use with these new descriptors. We present a new blob detector which can be implemented in real time and is faster than most of the currently used feature detectors. The detection is achieved with an innovative non-deterministic low-level operator called the Brightness Clustering Transform (BCT). The BCT can be thought of as a coarse-to-fine search through scale spaces for the true derivative of the image; it also mimics trans-saccadic perception of human vision. We call the new algorithm the Locally Contrasting Keypoints detector, or LOCKY. Showing good repeatability and robustness to the image transformations included in the Oxford dataset, LOCKY is amongst the fastest affine-covariant feature detectors.
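    The abstract describes the BCT only at a high level (a non-deterministic, coarse-to-fine search for bright regions). The following is a minimal illustrative sketch of that idea on a toy grayscale array, not the authors' implementation: random square windows each descend into their brightest quadrant and cast a vote, so that vote clusters mark candidate blobs. All names and parameters here (descend, bct_votes, n_votes, start_size) are hypothetical.

    ```python
    import random

    def descend(img, x0, y0, size, min_size=2):
        """Repeatedly move into the brightest quadrant of a square window
        until the window reaches min_size; return the resulting vote point."""
        while size > min_size:
            half = size // 2
            quads = [(x0, y0), (x0 + half, y0),
                     (x0, y0 + half), (x0 + half, y0 + half)]

            def qsum(qx, qy):
                # Brightness sum of one half-size quadrant.
                return sum(img[y][x]
                           for y in range(qy, qy + half)
                           for x in range(qx, qx + half))

            x0, y0 = max(quads, key=lambda q: qsum(*q))
            size = half
        return (x0 + size // 2, y0 + size // 2)

    def bct_votes(img, n_votes=200, start_size=8, seed=0):
        """Drop random windows on the image; each descends to a bright spot
        and casts one vote. Clusters of votes mark candidate blobs."""
        rng = random.Random(seed)
        h, w = len(img), len(img[0])
        votes = {}
        for _ in range(n_votes):
            x0 = rng.randrange(0, w - start_size + 1)
            y0 = rng.randrange(0, h - start_size + 1)
            point = descend(img, x0, y0, start_size)
            votes[point] = votes.get(point, 0) + 1
        return votes

    # Toy 16x16 image: dark background, one bright 3x3 blob centred near (10, 6).
    img = [[0] * 16 for _ in range(16)]
    for y in range(5, 8):
        for x in range(9, 12):
            img[y][x] = 255

    votes = bct_votes(img)
    peak = max(votes, key=votes.get)  # densest vote cluster, near the blob
    ```

    The published BCT also searches across multiple scales and handles dark-on-bright blobs; this sketch keeps only the single-scale voting mechanism to show why the operator is fast: each vote touches only a logarithmic number of window levels rather than the whole image.
    
    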

    Prioritized Selection in Visual Search through Onset Capture and Color Inhibition: Evidence from a Probe-Dot Detection Task.

    Observers performed a preview search task in which, on some trials, they had to indicate the presence of a briefly presented probe-dot. Probes could be presented on locations corresponding to old or new elements and prior to or after the presentation of the new elements. After the presentation of the new elements, probes were generally detected faster on new than on old locations, indicating prioritized selection of new elements. Prior to the presentation of the new elements, probes were detected faster on new than on old locations only when old and new elements differed in color. These results suggest that prioritized selection of new elements is mediated not by visual marking but by onset capture. Additionally, observers may apply color-based inhibition. Copyright 2005 by the American Psychological Association

    Influence of hand position on the near-effect in 3D attention

    Voluntary reorienting of attention in real-depth situations is characterized by an attentional bias to locations near the viewer once attention is deployed to a spatially cued object in depth. Previously, this effect (initially referred to as the 'near-effect') was attributed to access of a 3D viewer-centred spatial representation for guiding attention in 3D space. The aim of this study was to investigate whether the near-bias could have been associated with the position of the response-hand, which was always near the viewer in previous studies investigating endogenous attentional shifts in real depth. In Experiment 1, the response-hand was placed at either the near or the far target depth in a depth cueing task. Placing the response-hand at the far target depth abolished the near-effect, but failed to bias spatial attention to the far location. Experiment 2 showed that the response-hand effect was not modulated by the presence of an additional passive hand, whereas Experiment 3 confirmed that attentional prioritization of the passive hand was not masked by the influence of the responding hand on spatial attention in Experiment 2. The pattern of results is most consistent with the idea that response preparation can modulate spatial attention within a 3D viewer-centred spatial representation.

    Top-down control is not lost in the attentional blink: evidence from intact endogenous cuing.

    The attentional blink (AB) refers to the finding that performance on the second of two targets (T1 and T2) is impaired when the targets are presented at a target onset asynchrony (TOA) of less than 500 ms. One account of the AB assumes that the processing load of T1 leads to a loss of top-down control over stimulus selection. The present study tested this account by examining whether an endogenous spatial cue that indicates the location of a following T2 can facilitate T2 report even when the cue and T2 occur within the time window of the AB. Results from three experiments showed that endogenous cuing had a significant effect on T2 report, both during and outside of the AB; this cuing effect was modulated by both the cue-target onset asynchrony and cue validity, but was invariant across the AB. These results suggest that top-down control over target selection is not lost during the AB. © 2007 Springer-Verlag

    Response Selection in Visual Search: The Influence of Response Compatibility of Nontargets.

    Correspondence concerning this article should be addressed to Peter A. Starreveld, Department of Cognitive Psychology, Vrije Universiteit, Van der Boechorststraat 1, 1081 BT Amsterdam, the Netherlands. E-mail: [email protected]. Journal of Experimental Psychology: Human Perception and Performance, 2004, Vol. 30, No. 1, 56-78. Copyright 2004 by the American Psychological Association, Inc. DOI: 10.1037/0096-1523.30.1.56. As discussed previously, flat slopes of search functions are interpreted as evidence that distractor elements in the corresponding experiments were only preattentively processed. Because identification of a display element involves attentive processing, two-stage theories of visual search predict that the identities of distractors should not affect the search time for a target in any search task in which flat search slopes are obtained. In the present study, this prediction was put to the test.

    Visual distraction in cytopathology: should we be concerned?

    Visual distraction in cytopathology has not previously been investigated as a source of diagnostic error, presumably because the viewing field of a conventional light microscope is considered large enough to minimise interference from peripheral visual stimuli. Virtual microscopy, which involves the examination of digitised images of pathology specimens on computer screens, is beginning to challenge the central role of light microscopy as a diagnostic tool in cytopathology. The relatively narrow visual angle offered by virtual microscopy makes it conceivable that users of these systems are more vulnerable to visual interference. Using a variant of a visual distraction paradigm (the Eriksen flanker task), this study aimed to determine whether the accuracy and speed of interpreting cells on a central target screen are affected by images of cells and text displayed on neighbouring monitors under realistic reading-room conditions.

    Competition between auditory and visual spatial cues during visual task performance

    There is debate in the crossmodal cueing literature as to whether the capture of visual attention by sound is a fully automatic process. Recent studies show that sound still captures attention even when visual attention is endogenously focused. The current study investigated whether exogenous auditory and visual capture interact. Participants performed an orthogonal cueing task in which the visual target was preceded by both a peripheral visual cue and an auditory cue. When both cues were valid at chance level, both visual and auditory capture were observed. However, when the validity of the visual cue was increased to 80%, only visual capture, and no auditory capture, was observed. Furthermore, a highly predictive (80% valid) auditory cue was not able to prevent visual capture. These results demonstrate that crossmodal auditory capture does not occur when a competing predictive visual event is presented and is therefore not a fully automatic process.