What we observe is biased by what other people tell us: beliefs about the reliability of gaze behavior modulate attentional orienting to gaze cues
For effective social interactions with other people, information about the physical environment must be integrated with information about the interaction partner. To achieve this, processing of social information is guided by two components: a bottom-up mechanism reflexively triggered by stimulus-related information in the social scene, and a top-down mechanism activated by task-related context information. In the present study, we investigated whether these components interact during attentional orienting to gaze direction. In particular, we examined whether the spatial specificity of gaze cueing is modulated by expectations about the reliability of gaze behavior. Expectations were either induced by instruction or derived from experience with the displayed gaze behavior. Spatially specific cueing effects were observed with highly predictive gaze cues, but also when participants merely believed that actually non-predictive cues were highly predictive. Conversely, cueing effects for the whole gazed-at hemifield were observed with non-predictive gaze cues, and spatially specific cueing effects were attenuated when actually predictive gaze cues were believed to be non-predictive. This pattern indicates that (i) information about cue predictivity gained from sampling gaze behavior across social episodes can be incorporated into attentional orienting to social cues, and that (ii) beliefs about gaze behavior modulate attentional orienting to gaze direction even when they contradict information available from social episodes.
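As a rough illustration of the dependent measures involved (not taken from the paper; the condition labels and toy reaction times below are invented), the hemifield-wide and spatially specific gaze-cueing effects described above could be scored from per-trial RTs along these lines:

```python
# Hypothetical sketch only: scoring gaze-cueing effects from per-trial RTs.
import numpy as np

def cueing_effects(rt, condition):
    """rt: array of RTs (ms); condition: per-trial labels
    'exact' (target at the gazed-at location),
    'same_hemifield' (gazed-at side, different location),
    'opposite' (target opposite the gaze cue)."""
    rt, condition = np.asarray(rt, float), np.asarray(condition)
    mean = {c: rt[condition == c].mean()
            for c in ('exact', 'same_hemifield', 'opposite')}
    return {
        # Hemifield-wide cueing: the uncued side is slower than the gazed-at side overall.
        "hemifield_effect_ms": mean['opposite'] - (mean['exact'] + mean['same_hemifield']) / 2,
        # Spatially specific cueing: extra benefit for the exact gazed-at location.
        "specific_effect_ms": mean['same_hemifield'] - mean['exact'],
    }

# Toy example
rts = [340, 355, 400, 350, 360, 410, 345, 352, 405]
conds = ['exact', 'same_hemifield', 'opposite'] * 3
print(cueing_effects(rts, conds))
```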
Collaborative Gaze Channelling for Improved Cooperation During Robotic Assisted Surgery
The use of multiple robots for performing complex tasks is becoming a common practice for many robot applications. When different operators are involved, effective cooperation with anticipated manoeuvres is important for seamless, synergistic control of all the end-effectors. In this paper, the concept of Collaborative Gaze Channelling (CGC) is presented for improved control of surgical robots for a shared task. Through eye tracking, the fixations of each operator are monitored and presented in a shared surgical workspace. CGC permits remote or physically separated collaborators to share their intention by visualising the eye gaze of their counterparts, and thus recovers, to a certain extent, the information of mutual intent that we rely upon in a vis-à-vis working setting. In this study, the efficiency of surgical manipulation with and without CGC for controlling a pair of bimanual surgical robots is evaluated by analysing the level of coordination of two independent operators. Fitts' law is used to compare the quality of movement with or without CGC. A total of 40 subjects were recruited for this study, and the results show that the proposed CGC framework exhibits significant improvement (p<0.05) on all the motion indices used for quality assessment. This study demonstrates that visual guidance is an implicit yet effective way of communication during collaborative tasks for robotic surgery. Detailed experimental validation results demonstrate the potential clinical value of the proposed CGC framework. © 2012 Biomedical Engineering Society.
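Fitts' law itself is standard; as a minimal sketch (the distance, target width, and movement time below are made-up values, and this is not the paper's exact set of motion indices), the Shannon formulation of the index of difficulty and the resulting throughput can be computed as follows:

```python
# Illustrative sketch of Fitts' law (Shannon formulation), as commonly used to
# index movement quality; the numbers below are invented.
import math

def index_of_difficulty(distance, width):
    """ID (bits) = log2(D / W + 1) for a movement of amplitude D to a target of width W."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time_s):
    """Throughput (bits/s) = ID / movement time; higher values indicate better performance."""
    return index_of_difficulty(distance, width) / movement_time_s

# Example: a 120 mm reach to a 10 mm target completed in 0.9 s
print(round(index_of_difficulty(120, 10), 2), "bits")
print(round(throughput(120, 10, 0.9), 2), "bits/s")
```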
Negative emotional stimuli reduce contextual cueing but not response times in inefficient search
In visual search, previous work has shown that negative stimuli narrow the focus of attention and speed reaction times (RTs). This paper investigates these two effects by first asking whether negative emotional stimuli narrow the focus of attention to reduce the learning of a display context in a contextual cueing task and, second, whether exposure to negative stimuli also reduces RTs in inefficient search tasks. In Experiment 1, participants viewed either negative or neutral images (faces or scenes) prior to a contextual cueing task. In a typical contextual cueing experiment, RTs are reduced if displays are repeated across the experiment compared with novel displays that are not repeated. The results showed that a smaller contextual cueing effect was obtained after participants viewed negative stimuli than when they viewed neutral stimuli. However, in contrast to previous work, overall search RTs were not faster after viewing negative stimuli (Experiments 2 to 4). The findings are discussed in terms of the impact of emotional content on visual processing and the ability to use scene context to help facilitate search
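For context, a contextual cueing effect is conventionally scored as the mean RT for novel displays minus the mean RT for repeated displays; the sketch below is an assumed illustration with toy data, not the authors' analysis pipeline:

```python
# Assumed-for-illustration scoring of a contextual cueing effect.
import numpy as np

def contextual_cueing_effect(rt, display_type):
    """rt: per-trial RTs (ms); display_type: 'repeated' or 'novel' labels per trial."""
    rt, display_type = np.asarray(rt, float), np.asarray(display_type)
    return rt[display_type == 'novel'].mean() - rt[display_type == 'repeated'].mean()

# Toy data: a smaller effect would be expected after viewing negative images.
rts = [820, 790, 805, 880, 870, 895]
labels = ['repeated', 'repeated', 'repeated', 'novel', 'novel', 'novel']
print(contextual_cueing_effect(rts, labels), "ms")
```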
“Avoiding or approaching eyes”? Introversion/extraversion affects the gaze-cueing effect
We investigated whether the extra-/introversion personality dimension can influence processing of others' eye gaze direction and emotional facial expression during a target detection task. On the basis of previous evidence showing that self-reported trait anxiety can affect gaze cueing with emotional faces, we also verified whether trait anxiety can modulate the influence of intro-/extraversion on behavioral performance. Fearful, happy, angry, or neutral faces, with either direct or averted gaze, were presented before the target appeared in spatial locations congruent or incongruent with the stimuli's gaze direction. Results showed a significant influence of the intro-/extraversion dimension on the gaze-cueing effect for angry, happy, and neutral faces with averted gaze. Introverts did not show the gaze congruency effect when viewing angry expressions, but did so with happy and neutral faces; extraverts showed the opposite pattern. Importantly, the influence of intro-/extraversion on gaze cueing was not mediated by trait anxiety. These findings demonstrate that personality differences can shape the processing of interactions between relevant social signals.
Gaze following in multiagent contexts: Evidence for a quorum-like principle
Research shows that humans spontaneously follow another individual's gaze. However, little is known about how they respond when multiple gaze cues diverge across members of a social group. To address this question, we presented participants with displays depicting three (Experiment 1) or five (Experiment 2) agents showing diverging social cues. In a three-person group, one individual looking at the target (33% of the group) was sufficient to elicit gaze-facilitated target responses. With a five-person group, however, three individuals looking at the target (60% of the group) were necessary to produce the same effect. Gaze following in small groups therefore appears to be based on a quorum-like principle, whereby the critical level of social information needed for gaze following is determined by a proportion of consistent social cues scaled as a function of group size. As group size grows, greater agreement is needed to evoke joint attention.
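A hypothetical sketch of this quorum-like rule (the 1/3 and 3/5 thresholds come from the reported results; the function and its interface are illustrative only):

```python
# Illustrative sketch of the quorum-like principle summarised above.
def quorum_reached(n_looking, group_size):
    """Return True if enough group members gaze at the target to trigger gaze following."""
    thresholds = {3: 1 / 3, 5: 3 / 5}   # proportions reported for Experiments 1 and 2
    if group_size not in thresholds:
        raise ValueError("Only group sizes 3 and 5 were tested in this study.")
    return n_looking / group_size >= thresholds[group_size]

print(quorum_reached(1, 3))   # True: one of three suffices
print(quorum_reached(2, 5))   # False: two of five is below quorum
print(quorum_reached(3, 5))   # True: three of five reaches quorum
```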
Atypical processing of gaze cues and faces explains comorbidity between autism spectrum disorder (ASD) and attention deficit/hyperactivity disorder (ADHD)
This study investigated the neurobiological basis of comorbidity between autism spectrum disorder (ASD) and attention deficit/hyperactivity disorder (ADHD). We compared children with ASD, ADHD, or ADHD+ASD and typically developing controls (CTRL) on behavioural and electrophysiological correlates of gaze cue and face processing. We measured the effects of ASD, ADHD, and their interaction on the EDAN, an ERP marker of orienting visual attention towards a spatially cued location, and on the N170, a right-hemisphere lateralised ERP component linked to face processing. We identified atypical gaze cue and face processing in children with ASD and ADHD+ASD compared with the ADHD and CTRL groups. The findings indicate a neurobiological basis for the presence of comorbid ASD symptoms in ADHD. Further research using larger samples is needed.
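By way of illustration only, ERP markers such as the EDAN or N170 are typically quantified as the mean amplitude in a fixed time window at selected electrodes; the sketch below uses an invented channel index, sampling rate, and time window, not the study's parameters:

```python
# Assumed-for-illustration extraction of a mean ERP amplitude from epoched EEG data.
import numpy as np

def mean_amplitude(epochs, sfreq, channel, tmin, tmax, epoch_start=-0.2):
    """epochs: array (n_trials, n_channels, n_samples); returns the amplitude
    averaged over trials and samples in the [tmin, tmax] window for one channel."""
    start = int((tmin - epoch_start) * sfreq)
    stop = int((tmax - epoch_start) * sfreq)
    return epochs[:, channel, start:stop].mean()

rng = np.random.default_rng(0)
fake_epochs = rng.normal(size=(40, 64, 512))   # 40 trials, 64 channels, 1 s epochs at 512 Hz
print(mean_amplitude(fake_epochs, sfreq=512, channel=45, tmin=0.15, tmax=0.20))
```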
What Affects Social Attention? Social Presence, Eye Contact and Autistic Traits
Social understanding is facilitated by effectively attending to other people and the subtle social cues they generate. In order to more fully appreciate the nature of social attention and what drives people to attend to social aspects of the world, one must investigate the factors that influence social attention. This is especially important when attempting to create models of disordered social attention, e.g. a model of social attention in autism. Here we analysed participants' viewing behaviour during one-to-one social interactions with an experimenter. Interactions were conducted either live or via video (a social presence manipulation). Participants were asked questions and then required to answer them. The experimenter's eye contact was either direct or averted. Additionally, the influence of participants' self-reported autistic traits was investigated. We found that, regardless of whether the interaction was conducted live or via video, participants frequently looked at the experimenter's face, and they did this more often when being asked a question than when answering. Critical differences in social attention between the live and video interactions were also observed. Modifications of experimenter eye contact influenced participants' eye movements in the live interaction only, and increased autistic traits were associated with less looking at the experimenter for video interactions only. We conclude that analysing patterns of eye movements in response to both strictly controlled video stimuli and natural real-world stimuli furthers the field's understanding of the factors that influence social attention.
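One simple way to express "how often participants looked at the experimenter's face" is the proportion of gaze samples falling inside a face area of interest (AOI); the sketch below is an assumed illustration with invented AOI coordinates, not the authors' scoring procedure:

```python
# Hypothetical sketch: proportion of gaze samples inside a rectangular face AOI.
def proportion_on_face(gaze_xy, aoi=(300, 150, 500, 400)):
    """gaze_xy: iterable of (x, y) screen coordinates; aoi: (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = aoi
    hits = [x_min <= x <= x_max and y_min <= y <= y_max for x, y in gaze_xy]
    return sum(hits) / len(hits)

# Toy samples: higher proportions would be expected while being asked a question.
samples = [(320, 200), (410, 380), (700, 90), (450, 300), (100, 500)]
print(proportion_on_face(samples))
```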
Speaking and Listening with the Eyes: Gaze Signaling during Dyadic Interactions
Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest
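As a hedged sketch of the kind of cross-correlational analysis described (the binary coding of gaze and speech and the lag range are assumptions, not the authors' exact pipeline), the lagged correlation between two coded time series could be computed as follows:

```python
# Illustrative sketch: lagged correlation between coded gaze and speech signals.
import numpy as np

def lagged_correlation(gaze, speech, max_lag):
    """Normalised cross-correlation between two coded time series.
    Lag k pairs gaze at time t + k with speech at time t."""
    gaze = np.asarray(gaze, float) - np.mean(gaze)
    speech = np.asarray(speech, float) - np.mean(speech)
    denom = np.sqrt(np.sum(gaze ** 2) * np.sum(speech ** 2))
    full = np.correlate(gaze, speech, mode="full") / denom   # covers lags -(n-1) .. (n-1)
    mid = len(speech) - 1
    return {lag: float(full[mid + lag]) for lag in range(-max_lag, max_lag + 1)}

# Toy example: 1 = speaker gazes at listener / speaker talks, sampled once per second.
gaze   = [1, 1, 0, 0, 1, 1, 1, 0, 0, 1]
speech = [1, 1, 1, 0, 0, 1, 1, 1, 0, 0]
print(lagged_correlation(gaze, speech, max_lag=2))
```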
Contrasting vertical and horizontal representations of affect in emotional visual search
Independent lines of evidence suggest that the representation of emotional evaluation recruits both vertical and horizontal spatial mappings. These two spatial mappings differ in their experiential origins and their productivity, and available data suggest that they differ in their saliency. Yet, no study has so far compared their relative strength in an attentional orienting reaction time task that affords the simultaneous manifestation of both of them. Here we investigated this question using a visual search task with emotional faces. We presented angry and happy face targets and neutral distracter faces in top, bottom, left, and right locations on the computer screen. Conceptual congruency effects were observed along the vertical dimension, supporting the 'up=good' metaphor, but not along the horizontal dimension. This asymmetrical processing pattern was observed when faces were presented in a cropped (Experiment 1) and whole (Experiment 2) format. These findings suggest that the 'up=good' metaphor is more salient and readily activated than the 'right=good' metaphor, and that the former outcompetes the latter when the task context affords the simultaneous activation of both mappings.
The Role of Attention in a Joint-Action Effect
The most common explanation for joint-action effects has been the action co-representation account in which observation of another's action is represented within one's own action system. However, recent evidence has shown that the most prominent of these joint-action effects (i.e., the Social Simon effect), can occur when no co-actor is present. In the current work we examined whether another joint-action phenomenon (a movement congruency effect) can be induced when a participant performs their part of the task with a different effector to that of their co-actor and when a co-actor's action is replaced by an attention-capturing luminance signal. Contrary to what is predicted by the action co-representation account, results show that the basic movement congruency effect occurred in both situations. These findings challenge the action co-representation account of this particular effect and suggest instead that it is driven by bottom-up mechanisms
- …
