Laminar mixing of heterogeneous axisymmetric coaxial confined jets Final report
Laminar mixing of heterogeneous axisymmetrical coaxial confined jets for application to nuclear rocket propulsion
How many voices did you hear? Natural variability disrupts identity perception from unfamiliar voices
Our voices sound different depending on the context (laughing vs. talking to a child vs. giving a speech), making within‐person variability an inherent feature of human voices. When perceiving speaker identities, listeners therefore need to not only ‘tell people apart’ (perceiving exemplars from two different speakers as separate identities) but also ‘tell people together’ (perceiving different exemplars from the same speaker as a single identity). In the current study, we investigated how such natural within‐person variability affects voice identity perception. Using voices from a popular TV show, listeners, who were either familiar or unfamiliar with this show, sorted naturally varying voice clips from two speakers into clusters to represent perceived identities. Across three independent participant samples, unfamiliar listeners perceived more identities than familiar listeners and frequently mistook exemplars from the same speaker to be different identities. These findings point towards a selective failure in ‘telling people together’. Our study highlights within‐person variability as a key feature of voices that has striking effects on (unfamiliar) voice identity perception. Our findings not only open up a new line of enquiry in the field of voice perception but also call for a re‐evaluation of theoretical models to account for natural variability during identity perception
Similar representations of emotions across faces and voices
Emotions are a vital component of social communication, carried across a range of modalities and via
different perceptual signals such as specific muscle contractions in the face and in the upper
respiratory system. Previous studies have found that emotion recognition impairments after brain
damage depend on the modality of presentation: recognition from faces may be impaired whilst
recognition from voices remains preserved, and vice versa. On the other hand, there is also evidence
for shared neural activation during emotion processing in both modalities. In a behavioural study, we
investigated whether there are shared representations in the recognition of emotions from faces and
voices. We used a within-subjects design in which participants rated the intensity of facial expressions
and non-verbal vocalisations for each of the six basic emotion labels. For each participant and each
modality, we then computed a representation matrix with the intensity ratings of each emotion. These
matrices allowed us to examine the patterns of confusions between emotions and to characterise the
representations of emotions within each modality. We then compared the representations across
modalities by computing the correlations of the representation matrices across faces and voices. We
found highly correlated matrices across modalities, which suggest similar representations of emotions
across faces and voices. We also showed that these results could not be explained by commonalities
between low-level visual and acoustic properties of the stimuli. We thus propose that there are similar
or shared coding mechanisms for emotions which may act independently of modality, despite their
distinct perceptual inputs. This research was supported by an ESRC 1+3 PhD studentship to Lisa Kuhn (ES/I90042X/1)
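The matrix-correlation analysis described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function names and the data layout (one row of intensity ratings per stimulus, one column per emotion label) are assumptions.

```python
import numpy as np

def representation_matrix(ratings, stim_emotion):
    """Average intensity ratings into an emotion-by-label matrix.

    ratings:      (n_stimuli, n_labels) array of intensity ratings, one
                  column per emotion label the participant rated.
    stim_emotion: (n_stimuli,) array giving the index of the emotion each
                  stimulus actually expressed.
    """
    n_labels = ratings.shape[1]
    m = np.zeros((n_labels, n_labels))
    for e in range(n_labels):
        # Mean rating profile over all stimuli expressing emotion e
        m[e] = ratings[stim_emotion == e].mean(axis=0)
    return m

def matrix_correlation(m_faces, m_voices):
    """Pearson correlation between two flattened representation matrices."""
    return np.corrcoef(m_faces.ravel(), m_voices.ravel())[0, 1]
```

A matrix with a strong diagonal means each emotion was rated most intense under its own label; the off-diagonal cells capture the confusion patterns the abstract refers to, and the correlation compares those patterns across modalities.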
Idiosyncratic and shared contributions shape impressions from voices and faces
Voices elicit rich first impressions of what the person we are hearing might be like. Research stresses that these impressions from voices are shared across different listeners, such that people on average agree which voices sound trustworthy or old and which do not. However, can impressions from voices also be shaped by the ‘ear of the beholder’? We investigated whether - and how - listeners' idiosyncratic, personal preferences contribute to first impressions from voices. In two studies (993 participants, 156 voices), we find evidence for substantial idiosyncratic contributions to voice impressions using a variance partitioning approach. Overall, idiosyncratic contributions were as important as shared contributions to impressions from voices for inferred person characteristics (e.g., trustworthiness, friendliness). Shared contributions were only more influential for impressions of more directly apparent person characteristics (e.g., gender, age). Both idiosyncratic and shared contributions were reduced when stimuli were limited in their (perceived) variability, suggesting that natural variation in voices is key to understanding this impression formation. When comparing voice impressions to face impressions, we found that idiosyncratic and shared contributions to impressions were similar across modalities when stimulus properties were closely matched - although voice impressions were overall less consistent than face impressions. We thus reconceptualise impressions from voices as being formed not only based on shared but also idiosyncratic contributions. We use this new framing to suggest future directions of research, including understanding idiosyncratic mechanisms, development, and malleability of voice impression formation
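The variance-partitioning logic can be illustrated with a toy decomposition of a rater-by-stimulus rating matrix into a shared stimulus effect, a rater bias, and a residual rater-by-stimulus term (the idiosyncratic part plus noise). This is a simplified sketch under assumed data shapes, not the studies' actual statistical model:

```python
import numpy as np

def variance_components(ratings):
    """Decompose a ratings matrix (n_raters x n_stimuli) into variance shares.

    shared:        some voices are rated higher by everyone
    rater_bias:    some raters use the scale more generously overall
    idiosyncratic: rater-by-voice interaction (plus measurement noise)
    """
    grand = ratings.mean()
    stim = ratings.mean(axis=0) - grand    # per-voice effect, common to raters
    rater = ratings.mean(axis=1) - grand   # per-rater scale-use bias
    resid = ratings - grand - stim[None, :] - rater[:, None]
    return {
        "shared": stim.var(),
        "rater_bias": rater.var(),
        "idiosyncratic": resid.var(),
    }
```

In the actual studies the idiosyncratic share is further separated from measurement noise (e.g. via repeated ratings of the same stimuli); that step is omitted here for brevity.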
Comparing unfamiliar voice and face identity perception using identity sorting tasks
Identity sorting tasks, in which participants sort multiple naturally varying stimuli of usually two identities into perceived identities, have recently gained popularity in voice and face processing research. In both modalities, participants who are unfamiliar with the identities tend to perceive multiple stimuli of the same identity as different people and thus fail to “tell people together.” These similarities across modalities suggest that modality-general mechanisms may underpin sorting behaviour. In this study, participants completed a voice sorting and a face sorting task. Taking an individual differences approach, we asked whether participants’ performance on voice and face sorting of unfamiliar identities is correlated. Participants additionally completed a voice discrimination (Bangor Voice Matching Test) and a face discrimination task (Glasgow Face Matching Test). Using these tasks, we tested whether performance on sorting related to explicit identity discrimination. Performance on voice sorting and face sorting tasks was correlated, suggesting that common modality-general processes underpin these tasks. However, no significant correlations were found between sorting and discrimination performance, with the exception of significant relationships for performance on “same identity” trials with “telling people together” for voices and faces. Overall, any reported relationships were however relatively weak, suggesting the presence of additional modality-specific and task-specific processes
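Scoring a sorting task of this kind reduces to counting perceived clusters and pairwise errors. The sketch below is hypothetical (the function and variable names are illustrative, not from the paper): "telling together" errors are same-speaker pairs split across clusters, and "telling apart" errors are different-speaker pairs merged into one cluster.

```python
from itertools import combinations

def score_sorting(clusters, speaker_of):
    """clusters:   list of sets of clip ids, one set per perceived identity
       speaker_of: dict mapping clip id -> true speaker"""
    cluster_of = {clip: i for i, group in enumerate(clusters) for clip in group}
    telling_together_errors = 0  # same speaker, placed in different clusters
    telling_apart_errors = 0     # different speakers, placed in one cluster
    for a, b in combinations(speaker_of, 2):
        same_speaker = speaker_of[a] == speaker_of[b]
        same_cluster = cluster_of[a] == cluster_of[b]
        if same_speaker and not same_cluster:
            telling_together_errors += 1
        elif not same_speaker and same_cluster:
            telling_apart_errors += 1
    return len(clusters), telling_together_errors, telling_apart_errors
```

For example, a participant who sorts four clips from two speakers into three clusters has over-estimated the number of identities, and the pairwise counts show whether the failure was in telling together or telling apart.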
Highly accurate and robust identity perception from personally familiar voices
Previous research suggests that familiarity with a voice can afford benefits for voice and speech perception. However, even familiar voice perception has been reported to be error-prone in previous research, especially in the face of challenges such as reduced verbal cues and acoustic distortions. It has been hypothesised that such findings may arise due to listeners not being “familiar enough” with the voices used in laboratory studies, and thus being inexperienced with their full vocal repertoire. By extension, voice perception based on highly familiar voices – acquired via substantial, naturalistic experience – should therefore be more robust than voice perception from less familiar voices. We investigated this proposal by contrasting voice perception of personally-familiar voices (participants’ romantic partners) versus lab-trained voices in challenging experimental tasks. Specifically, we tested how differences in familiarity may affect voice identity perception from non-verbal vocalisations and acoustically-modulated speech. Large benefits for the personally-familiar voice over less familiar, lab-trained voice were found for identity recognition, with listeners displaying both highly accurate yet more conservative recognition of personally familiar voices. However, no familiar-voice benefits were found for speech comprehension against background noise. Our findings suggest that listeners have fine-tuned representations of highly familiar voices that result in more robust and accurate voice recognition despite challenging listening contexts, yet these advantages may not always extend to speech perception. Our study therefore highlights that familiarity is indeed a continuum, with identity perception for personally-familiar voices being highly accurate
Speed and Accuracy of Static Image Discrimination by Rats
When discriminating dynamic noisy sensory signals, human and primate subjects
achieve higher accuracy when they take more time to decide, an effect
attributed to accumulation of evidence over time to overcome neural noise. We
measured the speed and accuracy of twelve freely behaving rats discriminating
static, high contrast photographs of real-world objects for water reward in a
self-paced task. Response latency was longer in correct trials compared to
error trials. Discrimination accuracy increased with response latency over the
range of 500-1200ms. We used morphs between previously learned images to vary
the image similarity parametrically, and thereby modulate task difficulty from
ceiling to chance. Over this range we find that rats take more time before
responding in trials with more similar stimuli. We conclude that rats'
perceptual decisions improve with time even in the absence of temporal
information in the stimulus, and that rats modulate speed in response to
discrimination difficulty to balance speed and accuracy
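The evidence-accumulation account mentioned above is commonly formalised as a drift-diffusion process. The following is a generic simulation of that idea, not a model fitted to the rat data (all parameter values are arbitrary): lowering the drift rate, as a more similar morph pair would, yields both longer response times and lower accuracy.

```python
import numpy as np

def simulate_trial(drift, threshold=1.0, dt=0.01, noise=1.0, rng=None):
    """Accumulate noisy evidence until it crosses +/- threshold.
    Returns (response_time, correct), treating positive drift as correct."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, x > 0

def summarise(drift, n_trials=200, seed=0):
    """Mean response time and accuracy over many simulated trials."""
    rng = np.random.default_rng(seed)
    trials = [simulate_trial(drift, rng=rng) for _ in range(n_trials)]
    mean_rt = float(np.mean([t for t, _ in trials]))
    accuracy = float(np.mean([c for _, c in trials]))
    return mean_rt, accuracy
```

With these parameters, an "easy" condition (drift 2.0) finishes faster and more accurately than a "hard" one (drift 0.3), mirroring the pattern of longer latencies on more similar stimuli reported in the abstract.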
The effects of high variability training on voice identity learning.
High variability training has been shown to benefit the learning of new face identities. In three experiments, we investigated whether this is also the case for voice identity learning. In Experiment 1a, we contrasted high variability training sets - which included stimuli extracted from a number of different recording sessions, speaking environments and speaking styles - with low variability stimulus sets that only included a single speaking style (read speech) extracted from one recording session (see Ritchie & Burton, 2017 for faces). Listeners were tested on an old/new recognition task using read sentences (i.e. test materials fully overlapped with the low variability training stimuli) and we found a high variability disadvantage. In Experiment 1b, listeners were trained in a similar way, however, now there was no overlap in speaking style or recording session between training sets and test stimuli. Here, we found a high variability advantage. In Experiment 2, variability was manipulated in terms of the number of unique items as opposed to number of unique speaking styles. Here, we contrasted the high variability training sets used in Experiment 1a with low variability training sets that included the same breadth of styles, but fewer unique items; instead, individual items were repeated (see Murphy, Ipser, Gaigg, & Cook, 2015 for faces). We found only weak evidence for a high variability advantage, which could be explained by stimulus-specific effects. We propose that high variability advantages may be particularly pronounced when listeners are required to generalise from trained stimuli to different-sounding, previously unheard stimuli. We discuss these findings in the context of mechanisms thought to underpin advantages for high variability training
