Agreeing to disagree: constant non-alignment of speech gestures in dialogue
Numerous studies suggest that interlocutors in a dialogue align with each other in their articulatory gestures. It is often suggested, first, that this is the consequence of an automatic tendency for imitation and, second, that it fosters mutual understanding. Using online media archives, we tested whether alignment is therefore inevitable. The focus was on the pronunciation of a German word whose standard pronunciation differs from the Swabian variant, a difference acoustically reflected in the fricative spectra. We measured the fricative spectra of this word as produced by interviewers while they interviewed either a prominent German politician who used the Swabian variant or an interviewee who used the standard variant. Results showed neither an overall influence of the interviewees' pronunciation on the interviewers' fricative realizations nor a tendency for interviewer-interviewee pairs with different pronunciations to align over time. This shows that phonetic alignment in conversation is a more complex process than most current theories suggest. Moreover, failure to align may not impede mutual understanding.
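The fricative comparison described above hinges on a summary measure of the spectrum. The abstract does not name the measure, but the spectral centre of gravity (amplitude-weighted mean frequency) is a standard one-number characterization of fricatives; a minimal sketch in Python, assuming that measure:

```python
import numpy as np

def spectral_centre_of_gravity(signal, sample_rate):
    """Amplitude-weighted mean frequency of the power spectrum,
    a standard summary of a fricative's spectral shape."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# Sanity check: a pure 4 kHz tone has its centre of gravity at 4 kHz.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 4000 * t)
print(round(spectral_centre_of_gravity(tone, sr)))  # → 4000
```

For real fricative tokens one would first window and high-pass the segment; this sketch only shows the core computation.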
The perception of English front vowels by North Holland and Flemish listeners: acoustic similarity predicts and explains cross-linguistic and L2 perception
We investigated whether regional differences in the native language (L1) influence the perception of second language (L2) sounds. Many cross-language and L2 perception studies have assumed that the degree of acoustic similarity between L1 and L2 sounds predicts cross-linguistic and L2 performance. The present study tests this assumption by examining the perception of the English contrast between /ɛ/ and /æ/ by native speakers of Dutch as spoken in North Holland (the Netherlands) and in East and West Flanders (Belgium). A Linear Discriminant Analysis on acoustic data from both dialects showed that their differences in vowel production, as reported by Adank, van Hout, and Van de Velde (2007), should influence the perception of the L2 vowels if listeners focus on the vowels' acoustic/auditory properties. Indeed, the results of categorization tasks with Dutch or English vowels as response options showed that the two listener groups differed as predicted by the discriminant analysis. Moreover, the results of the English categorization task revealed that both groups of Dutch listeners displayed the asymmetric pattern found in previous word recognition studies, i.e. English /æ/ was more frequently confused with English /ɛ/ than the reverse. This suggests a strong link between previous L2 word learning results and the present L2 perceptual assimilation patterns.
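A two-class linear discriminant of the kind used above to predict perception from production data can be sketched with plain NumPy. The formant means and dispersions below are invented for illustration; the study used measured Dutch and English vowel formants:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented F1/F2 means in Hz for two English vowels (illustration only).
mean_eh, mean_ae = np.array([580.0, 1800.0]), np.array([750.0, 1650.0])
X = np.vstack([rng.normal(mean_eh, 40.0, (50, 2)),
               rng.normal(mean_ae, 40.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Two-class LDA: discriminant direction w = pooled_cov^-1 (mu1 - mu0),
# with the decision boundary at the midpoint of the projected class means.
mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
pooled_cov = (np.cov(X[y == 0].T) + np.cov(X[y == 1].T)) / 2.0
w = np.linalg.solve(pooled_cov, mu1 - mu0)
threshold = w @ (mu0 + mu1) / 2.0
predicted = (X @ w > threshold).astype(int)
accuracy = (predicted == y).mean()
print(f"LDA training accuracy: {accuracy:.2f}")
```

The logic of the study is that if two dialect groups' production clouds sit differently in this acoustic space, the discriminant predicts different cross-linguistic confusion patterns.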
Is vowel normalization independent of lexical processing?
The author wishes to thank James McQueen, Klaus Kohler, Randy Diehl, and an anonymous reviewer for comments on an earlier version of this article, and Marloes van der Goot, Laurance Bruggeman, and Jet Sueters for running the experiments.

Vowel normalization in speech perception was investigated in three experiments.
The range of the second formant in a carrier phrase was manipulated and this
affected the perception of a target vowel in a compensatory fashion: A low F2 range
in the carrier phrase made it more likely that the target vowel was perceived as a
front vowel, that is, with a high F2. Recent experiments indicated that this effect
might be moderated by the lexical status of the constituents of the carrier phrase.
Manipulation of the lexical status in the present experiments, however, did not
affect vowel normalization. In contrast, the range of vowels in the carrier phrase did
influence vowel normalization. If the carrier phrase consisted of mid-to-high front
vowels only, vowel categories shifted only for mid-to-high front vowels. It is argued
that these results are a challenge for episodic models of word recognition.
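The compensatory pattern described above is often modeled as extrinsic normalization: the same target F2 is evaluated relative to the F2 distribution of the carrier phrase, so a low-F2 carrier makes the target sound relatively front. A minimal sketch with invented frequencies:

```python
# Extrinsic normalization sketch: score a target F2 against the carrier's
# F2 distribution. All frequency values are invented for illustration.
def normalized_f2(target_f2_hz, carrier_f2_hz):
    mean = sum(carrier_f2_hz) / len(carrier_f2_hz)
    sd = (sum((f - mean) ** 2 for f in carrier_f2_hz) / len(carrier_f2_hz)) ** 0.5
    return (target_f2_hz - mean) / sd

ambiguous_target = 1500.0  # Hz, midway between a front and a back vowel
low_f2_carrier = [1100.0, 1250.0, 1050.0, 1200.0]
high_f2_carrier = [1900.0, 2050.0, 1850.0, 2000.0]

# Against a low-F2 carrier the target scores as relatively front
# (positive); against a high-F2 carrier, as relatively back (negative).
print(normalized_f2(ambiguous_target, low_f2_carrier) > 0)   # True
print(normalized_f2(ambiguous_target, high_f2_carrier) < 0)  # True
```

The reported finding that only vowels matching the carrier's vowel range shifted suggests the real mechanism is more selective than this whole-distribution z-score.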
On the causes of compensation for coarticulation: evidence for phonological mediation
This study examined whether compensation for coarticulation in fricative-vowel syllables is phonologically mediated or a consequence of auditory processes. Smits (2001a) had shown that compensation occurs for anticipatory lip rounding in a fricative caused by a following rounded vowel in Dutch. In a first experiment, the possibility that compensation is due to general auditory processing was investigated using nonspeech sounds. These did not cause context effects akin to compensation for coarticulation, although nonspeech sounds influenced speech sound identification in an integrative fashion. In a second experiment, a possible phonological basis for compensation for coarticulation was assessed by using audiovisual speech. Visual displays, which induced the perception of a rounded vowel, also influenced compensation for anticipatory lip rounding in the fricative. These results indicate that compensation for anticipatory lip rounding in fricative-vowel syllables is phonologically mediated. This result is discussed in the light of other compensation-for-coarticulation findings and general theories of speech perception.
How phonological reductions sometimes help the listener
In speech production, high-frequency words are more likely than low-frequency words to be phonologically
reduced. We tested in an eye-tracking experiment whether listeners can make use of this correlation
between lexical frequency and phonological realization of words. Participants heard prefixed verbs in
which the prefix was either fully produced or reduced. Simultaneously, they saw a high-frequency verb
and a low-frequency verb with this prefix—plus 2 distractors—on a computer screen. Participants were
more likely to look at the high-frequency verb when they heard a reduced prefix than when they heard
a fully produced prefix. Listeners hence exploit the correlation of lexical frequency and phonological
reduction and assume that a reduced prefix is more likely to belong to a high-frequency word. This shows
that reductions do not necessarily burden the listener but may in fact have a communicative function, in
line with functional theories of phonology.
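The dependent measure in such visual-world studies is typically the proportion of looks to each candidate, computed separately per prefix condition. A toy sketch with invented trial records:

```python
# Invented trial records for illustration: each trial notes the prefix
# realization heard and which candidate the participant fixated.
trials = [
    {"prefix": "reduced", "looked_at": "high_freq"},
    {"prefix": "reduced", "looked_at": "high_freq"},
    {"prefix": "reduced", "looked_at": "low_freq"},
    {"prefix": "full", "looked_at": "high_freq"},
    {"prefix": "full", "looked_at": "low_freq"},
    {"prefix": "full", "looked_at": "low_freq"},
]

def prop_high_freq_looks(trials, prefix_type):
    """Proportion of fixations on the high-frequency verb in one condition."""
    subset = [t for t in trials if t["prefix"] == prefix_type]
    return sum(t["looked_at"] == "high_freq" for t in subset) / len(subset)

# The reported pattern: more looks to the high-frequency verb after a
# reduced prefix than after a fully produced one.
print(round(prop_high_freq_looks(trials, "reduced"), 2))  # → 0.67
print(round(prop_high_freq_looks(trials, "full"), 2))     # → 0.33
```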
The role of native-language knowledge in the perception of casual speech in a second language
Casual speech processes, such as /t/-reduction, make word recognition harder. Additionally,
word recognition is harder in a second language (L2). Combining these challenges,
we investigated whether L2 learners have recourse to knowledge from their native language
(L1) when dealing with casual speech processes in their L2. In three experiments,
production and perception of /t/-reduction was investigated. An initial production experiment
showed that /t/-reduction occurred in both languages and patterned similarly in
proper nouns but differed when /t/ was a verbal inflection. Two perception experiments
compared the performance of German learners of Dutch with that of native speakers
for nouns and verbs. Mirroring the production patterns, German learners’ performance
strongly resembled that of native Dutch listeners when the reduced /t/ was part of a word
stem, but deviated where /t/ was a verbal inflection. These results suggest that a casual
speech process in a second language is problematic for learners when the process is not
known from the learner's native language, similar to what has been observed for phoneme
contrasts.
The mental lexicon is fully specified: evidence from eye-tracking
Four visual-world experiments, in which listeners heard spoken words and saw printed words,
compared an optimal-perception account with the theory of phonological underspecification.
This theory argues that default phonological features are not specified in the mental lexicon,
leading to asymmetric lexical matching: Mismatching input ("pin") activates lexical entries
with underspecified coronal stops ('tin'), but lexical entries with specified labial stops ('pin') are
not activated by mismatching input ("tin"). The eye-tracking data failed to show such a pattern.
Although words that were phonologically similar to the spoken target attracted more looks than
unrelated distractors, this effect was symmetric in Experiment 1 with minimal pairs ("tin"-
"pin") and in Experiments 2 and 3 with words with an onset overlap ("peacock" - "teacake").
Experiment 4 revealed that /t/-initial words were looked at more frequently if the spoken input
mismatched only in terms of place than if it mismatched in place and voice, contrary to the
assumption that /t/ is unspecified for place and voice. These results show that speech
perception uses signal-driven information to the fullest, as predicted by an optimal perception
account.
Coping with phonological assimilation in speech perception: evidence for early compensation
The pronunciation of the same word may vary considerably as a consequence of its context. The
Dutch word tuin (English 'garden') may be pronounced tuim if followed by bank (English 'bench'), but
not if followed by stoel (English 'chair'). In a series of four experiments, we examined how Dutch listeners
cope with this context sensitivity in their native language. A first word identification experiment
showed that the perception of a word-final nasal depends on the subsequent context. Viable assimilations,
but not unviable assimilations, were often confused perceptually with canonical word forms in
a word identification task. Two control experiments ruled out the possibility that this effect was caused
by perceptual masking or was influenced by lexical top-down effects. A passive-listening study in which
electrophysiological measurements were used showed that only unviable, but not viable, phonological
changes elicited a significant mismatch negativity. The results indicate that phonological assimilations
are dealt with by an early prelexical mechanism.
The role of perceptual integration in the recognition of assimilated word forms
We investigated how spoken words are recognized when they have been altered by phonological
assimilation. Previous research has shown that there is a process of perceptual compensation for
phonological assimilations. Three recently formulated proposals regarding the mechanisms for compensation
for assimilation make different predictions with regard to the level at which compensation is
supposed to occur as well as regarding the role of specific language experience. In the present study,
Hungarian words and nonwords, in which a viable and an unviable liquid assimilation was applied,
were presented to Hungarian and Dutch listeners in an identification task and a discrimination
task. Results indicate that viably changed forms are difficult to distinguish from canonical forms independent
of experience with the assimilation rule applied in the utterances. This reveals that auditory
processing contributes to perceptual compensation for assimilation, while language experience has
only a minor role to play when identification is required.
Correlation versus causation in multisensory perception
This research was supported in part by an Innovational Research Incentive Scheme Veni Grant awarded to A.J. by the Netherlands Organization for Scientific Research (NWO). The authors thank Sabrina Jung for help with the preparation of the materials and Lies Cuijpers for help with conducting the experiments.

Events are often perceived in multiple modalities. The co-occurring proximal visual and auditory stimuli are mostly also causally linked to the distal event, which makes it difficult to evaluate whether learned correlation or perceived causation guides binding in multisensory perception. Piano tones are an interesting exception: They are associated with the act of the pianist striking keys, an event that is visible to the perceiver, but result directly from hammers hitting strings, an event that typically is not visible to the perceiver. We examined the influence of seeing the hammer or the keystroke on auditory temporal order judgments (TOJs). Participants judged the temporal order of a dog bark and a piano tone while seeing the piano stroke shifted temporally relative to its audio signal. Visual lead increased "piano-first" responses in auditory TOJ, but more so when the associated keystroke was visible than when the sound-producing hammer was visible, even though both were equally visually salient. This provides evidence for a learning account of audiovisual perception.
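The reported effect can be quantified as a shift in the point of subjective simultaneity (PSS), the stimulus-onset asynchrony at which "piano-first" responses reach 50%. A sketch with invented response proportions, using simple linear interpolation of the crossover:

```python
# PSS estimation sketch. SOAs and response proportions are invented for
# illustration; real analyses typically fit a logistic psychometric curve.
def pss(soas_ms, p_piano_first):
    """Linearly interpolate the 50% crossover of a monotone psychometric curve."""
    for (x0, y0), (x1, y1) in zip(zip(soas_ms, p_piano_first),
                                  zip(soas_ms[1:], p_piano_first[1:])):
        if y0 <= 0.5 <= y1:
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("curve never crosses 50%")

soas = [-120, -60, 0, 60, 120]               # audio lead ... audio lag (ms)
keystroke = [0.05, 0.15, 0.40, 0.80, 0.95]   # crossover shifted further right
hammer = [0.10, 0.30, 0.55, 0.85, 0.95]

# The reported pattern: a larger visual-lead-induced PSS shift when the
# keystroke is visible than when the hammer is visible.
print(pss(soas, keystroke) > pss(soas, hammer))  # True
```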
