Monitoring different phonological parameters of sign language engages the same cortical language network but distinctive perceptual ones
The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine whether brain regions processing the sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did: nonsigns were associated with longer reaction times and stronger activations in an action observation network in all participants, and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.
Segmentation of Signs for Research Purposes: Comparing Humans and Machines
Wordnets have been a popular lexical resource type for many years. Their sense-based representation of lexical items and numerous relation structures have been used for a variety of computational and linguistic applications. The inclusion of different wordnets in multilingual wordnet networks has further extended their use into the realm of cross-lingual research. Wordnets have been released for many spoken languages. Research has also been carried out into the creation of wordnets for several sign languages, but none has yet resulted in publicly available datasets. This article presents our own efforts towards the inclusion of sign languages in a multilingual wordnet, starting with Greek Sign Language (GSL) and German Sign Language (DGS). Based on differences in the available language resources for GSL and DGS, we trial two workflows with different coverage priorities. We also explore how synergies between the two workflows can be leveraged and how future work on additional sign languages could profit from building on existing sign language wordnet data. The results of our work are made publicly available.
The impact of text segmentation on subtitle reading
Understanding the way people watch subtitled films has become a central concern for subtitling researchers in recent years. Both subtitling scholars and professionals generally believe that, to reduce cognitive load and enhance readability, line breaks in two-line subtitles should follow syntactic units. However, previous research has been inconclusive as to whether syntax-based segmentation facilitates comprehension and reduces cognitive load. In this study, we assessed the impact of text segmentation on subtitle processing among different groups of viewers: hearing people with different mother tongues (English, Polish, and Spanish) and deaf, hard-of-hearing, and hearing people with English as a first language. We measured three indicators of cognitive load (difficulty, effort, and frustration) as well as comprehension and eye tracking variables. Participants watched two video excerpts with syntactically and non-syntactically segmented subtitles. The aim was to determine whether syntax-based text segmentation and the viewers' linguistic background influence subtitle processing. Our findings show that non-syntactically segmented subtitles induced higher cognitive load but did not adversely affect comprehension. The results are discussed in the context of cognitive load, audiovisual translation, and deafness.
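To make the contrast under study concrete, the sketch below breaks the same subtitle into two lines in two ways: at the space nearest the character midpoint, and at the nearest token boundary that does not split a noun phrase, using spaCy noun chunks as a rough stand-in for syntactic units. The heuristic and function names are illustrative assumptions, not the segmentation rules used in the experiment.

```python
# Contrasting length-based vs. syntax-aware two-line subtitle breaks.
# Assumes spaCy with an English model:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def length_based_break(text: str) -> tuple[str, str]:
    """Break at the last space before the midpoint, ignoring syntax."""
    cut = text.rfind(" ", 0, len(text) // 2 + 1)
    return text[:cut], text[cut + 1:]

def syntactic_break(text: str) -> tuple[str, str]:
    """Break at the token boundary nearest the midpoint that does not
    fall inside a noun chunk, keeping phrases intact on one line."""
    doc = nlp(text)
    protected = set()  # character offsets strictly inside noun chunks
    for chunk in doc.noun_chunks:
        protected.update(range(chunk.start_char + 1, chunk.end_char))
    candidates = [t.idx for t in doc if t.idx > 0 and t.idx not in protected]
    if not candidates:                      # degenerate case: no safe boundary
        return length_based_break(text)
    mid = len(text) // 2
    cut = min(candidates, key=lambda i: abs(i - mid))
    return text[:cut].rstrip(), text[cut:]

line = "He promised to buy the old red bicycle tomorrow morning"
print(length_based_break(line))  # splits the noun phrase "the old red bicycle"
print(syntactic_break(line))     # keeps the noun phrase on one line
```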
Machine Learning for Enhancing Dementia Screening in Ageing Deaf Signers of British Sign Language
Real-time hand movement trajectory tracking based on machine learning approaches may assist the early identification of dementia in ageing deaf individuals who are users of British Sign Language (BSL), since there are few clinicians with appropriate communication skills and a shortage of sign language interpreters. In this paper, we introduce an automatic dementia screening system for ageing Deaf signers of BSL, using a Convolutional Neural Network (CNN) to analyse the sign space envelope and facial expression of BSL signers recorded in ordinary 2D videos from the BSL corpus. Our approach introduces a sub-network (the multi-modal feature extractor) that combines an accurate real-time hand trajectory tracking model with a real-time facial landmark motion analysis model. The experiments show the effectiveness of our deep learning-based approach on sign space tracking, facial motion tracking, and early-stage dementia assessment tasks.
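The paper's own tracking models are not public, so the sketch below illustrates only the hand-trajectory half of such a multi-modal extractor, using MediaPipe's hand landmark solution as a stand-in. The file name and the envelope features are illustrative assumptions; a classifier like the paper's CNN would consume richer features than these.

```python
# A sketch of 2D hand trajectory tracking plus crude "sign space envelope"
# statistics, with MediaPipe substituting for the paper's tracking model.
# Assumes: pip install mediapipe opencv-python numpy
import cv2
import mediapipe as mp
import numpy as np

def wrist_trajectory(video_path: str) -> np.ndarray:
    """Return per-frame normalised (x, y) wrist positions of the first
    detected hand; frames with no detection are skipped."""
    hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=2)
    points = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            wrist = result.multi_hand_landmarks[0].landmark[0]  # landmark 0 = wrist
            points.append((wrist.x, wrist.y))
    cap.release()
    hands.close()
    return np.array(points)

def envelope_features(trajectory: np.ndarray) -> dict:
    """Summary statistics a downstream classifier might consume:
    the bounding box of the signing space and the total path length."""
    mins, maxs = trajectory.min(axis=0), trajectory.max(axis=0)
    path_len = np.linalg.norm(np.diff(trajectory, axis=0), axis=1).sum()
    return {"bbox": (*mins, *maxs), "path_length": float(path_len)}

traj = wrist_trajectory("signer_clip.mp4")  # hypothetical corpus clip
print(envelope_features(traj))
```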
Picture naming in deaf children from deaf and hearing families compared with spontaneous gestures produced by hearing children: the influence of iconicity
Detecting cognitive impairment and dementia in Deaf people: The British Sign Language Cognitive Screening Test
To provide accurate diagnostic screening of deaf people who use signed communication, cognitive tests must be devised in signed languages, with normative deaf samples. This article describes the development of the first screening test for the detection of cognitive impairment and dementia in deaf signers. The British Sign Language Cognitive Screening Test uses standardized video administration to screen cognition using signed, rather than spoken or written, instructions and a large norm-referenced sample of 226 deaf older people. Percentiles are provided for clinical comparison. The tests showed good reliability, content validity, and correlation with age, intellectual ability, and education. Clinical discrimination was shown between the normative sample and 14 deaf patients with dementia. This innovative testing approach transforms the ability to detect dementia in deaf people, avoids the difficulties of using an interpreter, and enables culturally and linguistically sensitive assessment of deaf signers, with international potential for adaptation into other signed languages.
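The norm-referenced scoring such a test relies on is easy to make concrete: a patient's raw score is located as a percentile within the normative distribution. The sketch below uses invented numbers purely for illustration; it is not the test's actual scoring procedure.

```python
# Toy norm-referenced scoring: place a raw score as a percentile within
# a normative sample. All values here are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
norm_sample = rng.normal(loc=50, scale=10, size=226)  # hypothetical norms (n = 226)
patient_score = 31.0

percentile = stats.percentileofscore(norm_sample, patient_score)
print(f"Score {patient_score} falls at the {percentile:.1f}th percentile")
# A very low percentile would flag the score for clinical follow-up.
```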
Deaf and hearing children’s picture naming: impact of age of acquisition and language modality on representational gesture
Picture naming in deaf children from deaf and hearing families compared with spontaneous nonlinguistic gestures produced by hearing children: the influence of iconicity
Language impairments in the development of sign: Do they reside in a specific modality or are they modality-independent deficits?
Various theories of developmental language impairments have sought to explain these impairments in modality-specific ways, for example, that the language deficits in specific language impairment (SLI) or Down syndrome arise from impairments in auditory processing. Studies of signers with language impairments, especially those who are bilingual in a spoken language as well as a sign language, provide a unique opportunity to contrast abilities across languages in two modalities (cross-modal bilingualism). The aim of the article is to examine what developmental sign language impairments can tell us about the relationship between language impairments and modality. A series of individual and small-group studies is presented here, illustrating language impairments in sign language users and cross-modal bilinguals, covering Landau-Kleffner syndrome, Williams syndrome, Down syndrome, autism, and SLI. We conclude by suggesting how studies of sign language impairments can help researchers explore how different language impairments originate in different parts of the cognitive, linguistic, and perceptual systems.
