139 research outputs found
Effects of conversation content on viewing dyadic conversations
People typically follow conversations closely with their gaze. We asked whether this viewing is influenced by what is actually said in the conversation and by the viewer's psychological condition. We recorded the eye movements of healthy (N = 16) and depressed (N = 25) participants while they viewed video clips. Each video showed two people, each speaking one line of dialogue about a socio-emotionally important (i.e., personal) or unimportant (matter-of-fact) topic. Between the spoken lines, viewers made more saccadic shifts between the discussants, and looked more at the second speaker, in personal than in matter-of-fact conversations. Higher depression scores correlated with less looking at the currently speaking discussant. We conclude that subtle social attention dynamics can be detected from eye movements and that these dynamics are sensitive to the observer's psychological condition, such as depression.
Passive exposure to speech sounds induces long-term memory representations in the auditory cortex of adult rats
Experience-induced changes in the functioning of the auditory cortex are prominent in early life, especially during a critical period. Although auditory perceptual learning takes place automatically during this critical period, it is thought to require active training in later life. Previous studies demonstrated rapid changes in single-cell responses of anesthetized adult animals while they were exposed to sounds presented in a statistical learning paradigm. However, whether passive exposure to sounds can form long-term memory representations remains to be demonstrated. To investigate this issue, we first exposed adult rats to human speech sounds for 3 consecutive days, 12 h/day. Two groups of rats, exposed to either spectrotemporal or tonal changes in speech sounds, served as controls for each other. Then, electrophysiological brain responses to the same stimuli were recorded from the auditory cortex. A statistical learning paradigm was applied in both the exposure and test phases. An exposure effect was found for the spectrotemporal sounds, but not for the tonal sounds. Only the animals exposed to spectrotemporal sounds differentiated subtle changes in these stimuli, as indexed by the mismatch negativity response. The results point to the formation of long-term memory traces for speech sounds through passive exposure in adult animals.
Event-related potentials to task-irrelevant changes in facial expressions
Abstract
Background
Numerous previous experiments have used the oddball paradigm to study change detection. This paradigm is applied here to study change detection of facial expressions in a context that demands abstraction of emotional expression-related facial features from other, changing facial features.
Methods
Event-related potentials (ERPs) were recorded in adult humans engaged in a demanding auditory task. In an oddball paradigm, repeated pictures of faces with a neutral expression ('standard', p = .9) were rarely replaced by pictures with a fearful ('fearful deviant', p = .05) or happy ('happy deviant', p = .05) expression. Importantly, facial identities changed from picture to picture. Thus, change detection required abstraction of facial expression from changes in several low-level visual features.
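The stimulus design above can be sketched in code. This is a hypothetical illustration, not the authors' actual stimulus script: the trial count, the number of facial identities, and the random seed are assumptions; only the probabilities (standard p = .9, each deviant p = .05) and the identity-changes-on-every-picture constraint come from the abstract.

```python
import random

def make_oddball_sequence(n_trials=400, n_identities=20, seed=0):
    """Generate an oddball trial list of (identity, expression) pairs.

    Neutral standards (p = .90) are interleaved with fearful (p = .05)
    and happy (p = .05) deviants, and the facial identity changes from
    picture to picture, as in the paradigm described above.
    """
    rng = random.Random(seed)
    expressions = ["neutral", "fearful", "happy"]
    weights = [0.90, 0.05, 0.05]
    sequence = []
    prev_identity = None
    for _ in range(n_trials):
        expression = rng.choices(expressions, weights=weights)[0]
        # Identity must differ from the previous picture's identity.
        identity = rng.choice(
            [i for i in range(n_identities) if i != prev_identity]
        )
        sequence.append((identity, expression))
        prev_identity = identity
    return sequence

seq = make_oddball_sequence()
```

Because identity changes on every trial, detecting a deviant requires abstracting the expression from low-level image changes rather than spotting any picture change.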
Results
ERPs to both types of deviants differed from those to standards. At occipital electrode sites, ERPs to deviants were more negative than ERPs to standards at 150–180 ms and 280–320 ms post-stimulus. A positive shift to deviants at fronto-central electrode sites was also found in the 130–170 ms post-stimulus analysis window. Point-wise comparisons between the amplitudes elicited by standards and deviants revealed that the occipital negativity emerged earlier to happy than to fearful deviants (after 140 ms versus 160 ms post-stimulus, respectively). In turn, the anterior positivity emerged earlier to fearful than to happy deviants (110 ms versus 120 ms post-stimulus, respectively).
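The point-wise comparison above can be sketched as follows. This is a minimal illustration under assumed conventions, not the authors' exact procedure: it runs a paired t-test across participants at each time sample and reports the first time point opening a run of significant samples; the array shapes, alpha level, and run-length criterion are all assumptions.

```python
import numpy as np
from scipy.stats import ttest_rel

def difference_onset(standard, deviant, times, alpha=0.05, n_consecutive=5):
    """Return the latency at which deviant and standard ERPs start to differ.

    standard, deviant: (n_participants, n_samples) amplitude arrays.
    times: (n_samples,) latencies in ms.
    A run of n_consecutive significant samples is required, to avoid
    calling a single spurious sample an onset.
    """
    _, p = ttest_rel(deviant, standard, axis=0)  # point-wise paired t-tests
    significant = p < alpha
    run = 0
    for i, sig in enumerate(significant):
        run = run + 1 if sig else 0
        if run == n_consecutive:
            return times[i - n_consecutive + 1]  # start of the run
    return None  # no reliable difference found
```

Applied separately to happy and fearful deviants, such a function would yield the kind of onset latencies (e.g. 140 ms versus 160 ms) reported above.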
Conclusion
ERP amplitude differences between emotional and neutral expressions indicated pre-attentive change detection of facial expressions among neutral faces. The posterior negative difference at 150–180 ms latency resembled visual mismatch negativity (vMMN) – an index of pre-attentive change detection previously studied only for changes in low-level visual features. The positive anterior difference in ERPs at 130–170 ms post-stimulus probably indexed pre-attentive orienting of attention towards emotionally significant changes. The results show that the human brain can abstract emotion-related features of faces while engaged in a demanding task in another sensory modality.
Explicit behavioral detection of visual changes develops without their implicit neurophysiological detectability
Change blindness is a failure to report major changes across consecutive images when they are separated, e.g., by a brief blank interval. Successful change detection across such interruptions requires focal attention to the changes. However, findings of implicit detection of visual changes during change blindness have raised the question of whether the implicit mode is necessary for the development of the explicit mode. To address this question, we recorded the visual mismatch negativity (vMMN) of the event-related potentials (ERPs) of the brain, an index of implicit pre-attentive visual change detection, in adult humans performing an oddball variant of the change blindness flicker task. Images of 500 ms in duration were presented repeatedly in continuous sequences, alternating with a blank interval (either 100 ms or 500 ms in duration throughout a stimulus sequence). Occasionally (p = 0.2), a change (a color change, or an omission or addition of an object or part of an object in the image) was present. The participants attempted to detect the occasional change explicitly (via voluntary button press). With both interval durations, it took 10–15 change presentations on average before the participants eventually detected the changes explicitly in a sequence, with the 500 ms interval requiring only slightly longer exposure to the series than the 100 ms one. Nevertheless, prior to this point of explicit detectability, implicit detection of the changes, as indexed by vMMN, was observed only with the 100 ms intervals. These findings of explicit change detection developing both with and without implicit change detection suggest that the two modes of change detection may recruit independent neural mechanisms.
Encoding specificity instead of online integration of real-world spatial regularities for objects in working memory
Most objects show high degrees of spatial regularity (e.g. beach umbrellas appear above, not under, beach chairs). The spatial regularities of real-world objects benefit visual working memory (VWM), but the mechanisms behind this spatial regularity effect remain unclear. The “encoding specificity” hypothesis suggests that spatial regularity will enhance the visual encoding process but will not facilitate the integration of information online during VWM maintenance. The “perception-alike” hypothesis suggests that spatial regularity will function in both visual encoding and online integration during VWM maintenance. We investigated whether VWM integrates sequentially presented real-world objects by focusing on the existence of the spatial regularity effect. Throughout five experiments, we manipulated the presentation (simultaneous vs. sequential) and regularity (with vs. without regularity) of memory arrays among pairs of real-world objects. The spatial regularity of memory objects presented simultaneously, but not sequentially, improved VWM performance. We also examined whether memory load, verbal suppression and masking, and memory array duration hindered the spatial regularity effect in sequential presentation. We found a stable absence of the spatial regularity effect, suggesting that the participants were unable to integrate real-world objects based on spatial regularities online. Our results support the encoding specificity hypothesis, wherein the spatial regularity of real-world objects can enhance the efficiency of VWM encoding, but VWM cannot exploit spatial regularity to help organize sampled sequential information into meaningful integrations.
Decreased intersubject synchrony in dynamic valence ratings of sad movie contents in dysphoric individuals
Emotional reactions to movies are typically similar between people. However, depressive symptoms decrease synchrony in brain responses. Less is known about the effect of depressive symptoms on intersubject synchrony in conscious stimulus-related processing. In this study, we presented amusing, sad and fearful movie clips to dysphoric individuals (those with elevated depressive symptoms) and control participants, who dynamically rated the clips' valence (positive vs. negative). We analysed both the mean values and the intersubject correlation (ISC) of the valence ratings. We used electrodermal activity (EDA) to complement the measurement in a separate session. There were no group differences in either the EDA or the mean valence ratings for any movie type. As expected, the ISC of the valence ratings was lower in the dysphoric than in the control group, specifically for the sad movie clips. In addition, there was a negative relationship between the ISC of the valence ratings and depressive symptoms for sad movie clips in the full sample. The results are discussed in the context of the negative attentional bias in depression. The findings extend previous brain activity results on ISC by showing that depressive symptoms also increase variance in conscious ratings of stimulus valence in a mood-congruent manner.
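Rating ISC of the kind analysed above can be computed in a few lines. A minimal sketch, assuming ratings are stored as a participants × time-points array and ISC is taken as the mean pairwise Pearson correlation between participants' rating time series; the study's exact pipeline may differ.

```python
import numpy as np

def intersubject_correlation(ratings):
    """Mean pairwise Pearson correlation across participants.

    ratings: (n_participants, n_timepoints) array of dynamic valence
    ratings, one time series per participant.
    """
    r = np.corrcoef(ratings)                  # participant-by-participant matrix
    upper = r[np.triu_indices_from(r, k=1)]   # unique off-diagonal pairs
    return upper.mean()
```

Lower values indicate that participants' moment-to-moment valence ratings diverge from each other, which is the pattern reported for the dysphoric group on sad clips.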
The effect of sad mood on early sensory event-related potentials to task-irrelevant faces
It has been shown that the perceiver's mood affects the perception of emotional faces, but it is not known how mood affects preattentive brain responses to emotional facial expressions. To examine this question, we experimentally induced sad and neutral mood in healthy adults before presenting them with task-irrelevant pictures of faces while electroencephalography was recorded. Sad, happy, and neutral faces were presented to the participants in an ignore oddball condition. Differential responses (emotional – neutral) for the P1, N170, and P2 amplitudes were extracted and compared between the neutral and sad mood conditions. Emotional facial expressions modulated all the components, and an expression-by-mood interaction was found for P1: the emotional modulation by happy faces found in the neutral mood condition disappeared in the sad mood condition. For N170 and P2, we found larger response amplitudes for both emotional expressions, regardless of mood. The results extend previous behavioral findings by showing that mood affects even low-level cortical feature encoding of task-irrelevant faces.
Electrophysiological evidence for change detection in speech sound patterns by anesthetized rats
Social interactions through the eyes of macaques and humans
Group-living primates frequently interact with each other to maintain social bonds as well as to compete for valuable resources. Observing such social interactions between group members provides individuals with essential information (e.g. on the fighting ability or altruistic attitude of group companions) to guide their social tactics and choice of social partners. This process requires individuals to selectively attend to the most informative content within a social scene. It is unclear how non-human primates allocate attention to social interactions in different contexts, and whether they share similar patterns of social attention with humans. Here we compared the gaze behaviour of rhesus macaques and humans when free-viewing the same set of naturalistic images. The images contained positive or negative social interactions between two conspecifics of different phylogenetic distance from the observer, i.e. affiliation or aggression exchanged by two humans, rhesus macaques, Barbary macaques, baboons or lions. Monkeys directed a variable amount of gaze at the two conspecific individuals in the images according to their roles in the interaction (i.e. giver or receiver of affiliation/aggression). Their gaze distribution to non-conspecific individuals varied systematically according to the viewed species and the nature of the interactions, suggesting a contribution of both prior experience and innate bias in guiding social attention. Furthermore, the monkeys' gaze behaviour was qualitatively similar to that of humans, especially when viewing negative interactions. Detailed analysis revealed that both species directed more gaze at the face than at the body region when inspecting individuals, and attended more to the body region in negative than in positive social interactions.
Our study suggests that monkeys and humans share a similar pattern of role-sensitive, species- and context-dependent social attention, implying a homologous cognitive mechanism of social attention between rhesus macaques and humans.
