Phonological Factors Affecting L1 Phonetic Realization of Proficient Polish Users of English
Acoustic phonetic studies examine the L1 of Polish speakers with professional-level proficiency in English. The studies comprise two tasks: a production task carried out entirely in Polish, and a phonetic code-switching task in which speakers insert target Polish words or phrases into an English carrier. Two phonetic parameters are studied: the oft-investigated VOT, and glottalization vs. sandhi linking of word-initial vowels. In monolingual Polish mode, L2 interference was observed for the VOT parameter but not for sandhi linking; it is suggested that this discrepancy may be related to the differing phonological status of the two parameters. In the code-switching tasks, VOTs were on the whole more English-like than in monolingual mode, but this appeared to be a matter of individual performance. An increase in the rate of sandhi linking in the code-switches, except in the case of one speaker, appeared to be a function of accelerated production of the L1 target items.
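For the first parameter: VOT (voice onset time) is the interval from the stop-release burst to the onset of vocal-fold vibration, with Polish voiceless stops showing short-lag values and English ones long-lag values. A minimal Python sketch of how such a measurement might be automated, assuming a hand-annotated burst time and using the parselmouth interface to Praat (the file name, burst time, and estimate_vot helper are hypothetical):

```python
# Hypothetical sketch: VOT as the interval between an annotated stop-release
# burst and the first voiced pitch frame, via Praat's pitch tracker.
import parselmouth  # pip install praat-parselmouth

def estimate_vot(wav_path, burst_time):
    """Return VOT in seconds, or None if no voicing follows the burst."""
    snd = parselmouth.Sound(wav_path)
    pitch = snd.to_pitch(time_step=0.001)   # 1 ms frames for temporal precision
    f0 = pitch.selected_array["frequency"]  # 0.0 marks voiceless frames
    for t, f in zip(pitch.xs(), f0):
        if t > burst_time and f > 0:
            return t - burst_time
    return None

# A short-lag Polish /t/ might yield ~0.02-0.03 s; an English-influenced
# production would show a longer lag.
print(estimate_vot("token.wav", burst_time=0.412))
```

In practice the burst would be located by hand in a waveform editor and each automatic voicing decision verified by eye.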
On the universal structure of human lexical semantics
How universal is human conceptual structure? The way concepts are organized in the human brain may reflect distinct features of cultural, historical, and environmental background in addition to properties universal to human cognition. Semantics, or meaning expressed through language, provides direct access to the underlying conceptual structure, but meaning is notoriously difficult to measure, let alone parameterize. Here we provide an empirical measure of semantic proximity between concepts using cross-linguistic dictionaries. Across languages carefully selected from a phylogenetically and geographically stratified sample of genera, translations of words reveal cases where a particular language uses a single polysemous word to express concepts represented by distinct words in another. We use the frequency of polysemies linking two concepts as a measure of their semantic proximity, and represent the pattern of such linkages as a weighted network. This network is highly uneven and fragmented: certain concepts are far more prone to polysemy than others, and naturally interpretable clusters emerge, loosely connected to each other. Statistical analysis shows that these structural properties are consistent across different language groups, and largely independent of geography, environment, and literacy. We therefore conclude that the conceptual structure connecting the basic vocabulary studied is primarily due to universal features of human cognition and language use.
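The network construction is straightforward to make concrete: nodes are concepts, and an edge's weight counts how many sampled languages express both concepts with a single polysemous word. A minimal sketch in Python with networkx, under this reading of the abstract and with an invented three-language toy lexicon (the study itself used a stratified dictionary sample):

```python
# Toy polysemy network: edge weight = number of languages whose single
# word covers both concepts.
import networkx as nx
from itertools import combinations

# Hypothetical data: language -> word -> set of concepts it expresses
lexicons = {
    "lang_A": {"ra'a": {"SUN", "DAY"}, "vai": {"WATER"}},
    "lang_B": {"sol": {"SUN"}, "dia": {"DAY"}},
    "lang_C": {"mata": {"SUN", "DAY"}},
}

G = nx.Graph()
for words in lexicons.values():
    for concepts in words.values():
        for a, b in combinations(sorted(concepts), 2):
            weight = G.get_edge_data(a, b, {"weight": 0})["weight"]
            G.add_edge(a, b, weight=weight + 1)

print(G["SUN"]["DAY"]["weight"])  # -> 2: two languages link SUN and DAY
```

Clusters and their connectivity can then be read off the weighted graph with standard community-detection and network statistics.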
Lusoga (Lutenga)
Lusoga is an interlacustrine Bantu language spoken in the eastern part of Uganda, in the region of Busoga, which is bordered by the Victoria Nile in the west, Lake Kyoga in the north, the River Mpologoma in the east, and Lake Victoria in the south. According to the 2002 census, the language is spoken by slightly over two million people (UBOS 2006: 12).
Why Make Life Hard? Resolutions to Problems of Rare and Difficult Sound Types
Proceedings of the Twenty-Fourth Annual Meeting of the Berkeley Linguistics Society: General Session and Parasession on Phonetics and Phonological Universals (1998)
Articulatory Phonology and Sukuma "Aspirated Nasals"
Proceedings of the Seventeenth Annual Meeting of the Berkeley Linguistics Society: Special Session on African Language Structures (1991), pp. 145-15
Phonetics in the Field
It seems generally the case that little detail on specifically phonetic matters is provided in a typical grammar; nor is there much use of phonetic techniques to provide insights on other matters, such as adding precision to observations of phonological alternations or testing whether supposed syntactic ambiguities are actually disambiguated at the phonetic level. While syntactic patterns are documented with example sentences, often from natural discourse or texts, the phonetic facts are rarely if ever documented by the presentation of hard evidence. To see whether this impression was justified, a survey was conducted of twenty grammars published or submitted as doctoral dissertations over the dozen years from 1989 to 2000.
Anticipatory coarticulation in Hungarian VnC sequences
The durations of the vowel and the nasal were analyzed in the casual pronunciation of Hungarian words containing the sequence Vn.C, where '.' is a syllable boundary and C is a stop, affricate, fricative, or approximant. It was found that, due to anticipatory coarticulation, the duration of the nasal is significantly shorter before fricatives and approximants than before stops and affricates. A learning algorithm was then used to distinguish stops/affricates from fricatives/approximants in VnC sequences: C was classified with a support vector machine (SVM) using a radial basis function (RBF) kernel (MATLAB, version 7.0). The results show close to 95% correct responses for the stop/affricate vs. fricative/approximant distinction of C, as opposed to about 60% correct responses for the classification of the voicing feature of C.
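The paper's pipeline was built in MATLAB 7.0; a rough re-implementation sketch of the same idea in Python/scikit-learn follows. The features and synthetic durations here are invented for illustration (nasal duration made shorter before fricatives/approximants, per the finding above), so only the modeling step, an RBF-kernel SVM, mirrors the abstract:

```python
# Sketch: RBF-kernel SVM separating stops/affricates from fricatives/approximants
# on (nasal duration, vowel duration) features; data are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_stop = np.column_stack([rng.normal(0.070, 0.010, 100),   # longer nasal
                          rng.normal(0.090, 0.015, 100)])
X_fric = np.column_stack([rng.normal(0.045, 0.010, 100),   # shorter nasal
                          rng.normal(0.095, 0.015, 100)])
X = np.vstack([X_stop, X_fric])
y = np.array([0] * 100 + [1] * 100)  # 0 = stop/affricate, 1 = fricative/approximant

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print(cross_val_score(clf, X, y, cv=5).mean())  # high accuracy on these features
```

On well-separated durations like these the classifier approaches ceiling; the real study's gap between ~95% for manner and ~60% for voicing suggests its acoustic cues were far more informative for the former than the latter.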
Call me Alix, not Elix: vowels are more important than consonants in own-name recognition at 5 months.
Consonants and vowels differ acoustically and articulatorily, but also functionally: consonants are more relevant for lexical processing, and vowels for prosodic/syntactic processing. These functional biases could be powerful bootstrapping mechanisms for learning language, but their developmental origin remains unclear. The relative importance of consonants and vowels at the onset of lexical acquisition was assessed in French-learning 5-month-olds by testing sensitivity to minimal phonetic changes in their own name. Infants' reactions to mispronunciations revealed sensitivity to vowel but not consonant changes. Vowels were also more salient (in duration and intensity) but less distinct (on spectrally based measures) than consonants. Lastly, vowel (but not consonant) mispronunciation detection was modulated by acoustic factors, in particular spectrally based distance. These results establish that consonant changes do not affect lexical recognition at 5 months, while vowel changes do; the consonant bias observed later in development emerges only after 5 months, through additional language exposure.
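One common operationalization of a "spectrally based distance" between vowels (not necessarily the authors' exact measure) is Euclidean distance in Bark-scaled formant space. A minimal sketch, with rough illustrative formant values:

```python
# Euclidean distance between two vowels in Bark-scaled (F1, F2) space.
import math

def hz_to_bark(f_hz):
    # Traunmueller (1990) Hz-to-Bark conversion
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def vowel_distance(f1_a, f2_a, f1_b, f2_b):
    return math.dist(
        (hz_to_bark(f1_a), hz_to_bark(f2_a)),
        (hz_to_bark(f1_b), hz_to_bark(f2_b)),
    )

# e.g. an /a/-/e/ change (roughly the Alix/Elix contrast), illustrative values:
print(vowel_distance(750, 1450, 390, 2300))  # ~4.3 Bark
```

Larger distances of this kind would be expected to make a vowel mispronunciation easier for infants to detect.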
