40 research outputs found

    A practical approach to language complexity: a wikipedia case study

    In this paper we present a statistical analysis of English texts from Wikipedia. We address the issue of language complexity empirically by comparing the Simple English Wikipedia (Simple) to comparable samples of the main English Wikipedia (Main). Simple is supposed to use a simplified language with a limited vocabulary, and editors are explicitly requested to follow this guideline, yet in practice the vocabulary richness of the two samples is at the same level. Detailed analysis of longer units (n-grams of words and part-of-speech tags) shows that the language of Simple is less complex than that of Main primarily due to the use of shorter sentences, as opposed to drastically simplified syntax or vocabulary. Comparing the two language varieties by the Gunning readability index supports this conclusion. We also report on the topical dependence of language complexity: the language is more advanced in conceptual articles than in person-based (biographical) and object-based articles. Finally, we investigate the relation between conflict and language complexity by analyzing the content of the talk pages associated with controversial and peacefully developing articles, concluding that controversy has the effect of reducing language complexity.
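    The Gunning readability index used in the abstract above has a simple closed form: 0.4 × (average sentence length + percentage of words with three or more syllables). A minimal sketch follows — the vowel-group syllable counter is a crude assumption, and the official definition additionally excludes proper nouns and words made long only by common suffixes:

    ```python
    import re

    def count_syllables(word):
        # naive heuristic: one syllable per contiguous vowel group
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def gunning_fog(text):
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z]+", text)
        complex_words = [w for w in words if count_syllables(w) >= 3]
        # 0.4 * (avg sentence length + % of "complex" words)
        return 0.4 * (len(words) / len(sentences)
                      + 100 * len(complex_words) / len(words))
    ```

    On short, simple prose the score stays low (e.g. two three-word sentences with no complex words score 0.4 × 3 = 1.2), which is the mechanism the abstract points to: shorter sentences alone push Simple's index down.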

    ReadNet: A Hierarchical Transformer Framework for Web Article Readability Analysis

    Analyzing the readability of articles is an important sociolinguistic task. Addressing it is necessary for the automatic recommendation of appropriate articles to readers with different comprehension abilities, and it further benefits education systems, web information systems, and digital libraries. Current methods for assessing readability employ empirical measures or statistical learning techniques that are limited in their ability to characterize complex patterns such as article structures and the semantic meanings of sentences. In this paper, we propose a new and comprehensive framework that uses a hierarchical self-attention model to analyze document readability. In this model, measurements of sentence-level difficulty are captured along with the semantic meaning of each sentence. Additionally, the sentence-level features are incorporated to characterize the overall readability of an article with consideration of article structure. We evaluate our proposed approach on three widely used benchmark datasets against several strong baseline approaches. Experimental results show that our proposed method achieves state-of-the-art performance in estimating the readability of various web articles and literature. (Comment: ECIR 202)

    Readability Formula for Chinese as a Second Language


    Predicting Text Readability with Personal Pronouns

    Part 5: Perceptual Intelligence. While the classic readability formula exploits word and sentence length, we test whether personal pronouns (PPs) can be used to predict text readability with similar accuracy. Out of this motivation, we first calculated the readability scores of randomly selected texts of nine genres from the British National Corpus (BNC). We then used multiple linear regression (MLR) to determine the degree to which readability could be explained by any of the 38 individual or combined subsets of various PPs in their orthographic forms (including I, me, we, us, you, he, him, she, her (the objective case), it, they and them). Results show that (1) subsets of plural PPs can be more predictive than those of singular ones; (2) subsets of objective forms make better predictions than those of subjective ones; (3) both the subsets of first- and third-person PPs show stronger predictive power than those of second-person PPs; and (4) adding the article the to the subsets improves the prediction only slightly. Re-evaluation with resampled texts from the BNC verifies the practicality of using PPs as an alternative approach to predicting text readability.
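    The feature-extraction step behind a study like this can be sketched as counting relative pronoun frequencies per text; those rates then serve as the predictors in the MLR model. A minimal sketch — the pronoun subset and the whitespace tokenizer are illustrative assumptions, not the paper's 38 subsets:

    ```python
    def pronoun_rates(text, pronouns=("i", "we", "you", "they", "them")):
        # relative frequency of each pronoun; these rates would be the
        # independent variables in a multiple linear regression on
        # readability scores
        words = text.lower().split()  # naive whitespace tokenizer
        return [words.count(p) / len(words) for p in pronouns]
    ```

    Each text becomes one row of the regression design matrix, with its readability score as the response variable.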

    Influence of Term Familiarity in Readability of Spanish e-Government Web Information

    It is well known that the linguistic features of a written text affect its readability, understanding readability as the ease with which a reader can understand the text. This paper focuses on analysing the influence of certain linguistic features on the readability of current Spanish e-Government websites. Specifically, the "familiarity" of the terms on web pages, as well as the "frequency" of these terms, are studied, among others. First, this research analysed a corpus extracted from the current information websites of the Spanish e-Government and its simplified counterparts. Then, using machine learning methods, a supervised model was built on the influence of different term-familiarity lists on text readability in the corpus. Different term lists were tested, and it was concluded that the differences between them have a great impact on performance. An accuracy of 81% was achieved with a combination of frequency lists. In conclusion, term lists and the frequencies of the terms make it possible to determine to a high degree how difficult a text is to understand. (Work supported by the Spanish Ministry of Economy, Industry and Competitiveness, CSO2017-86747-R)
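    One way to read the term-familiarity feature described above: score a page by the share of its words that appear on a familiar-terms list, then feed such scores to a supervised classifier. A minimal sketch, where the word list is a toy assumption rather than the paper's actual Spanish frequency lists:

    ```python
    def familiar_fraction(text, familiar_terms):
        # fraction of a text's tokens found in a "familiar terms" list;
        # the paper combines several list-based features like this one
        words = [w.strip(".,;:!?").lower() for w in text.split()]
        return sum(w in familiar_terms for w in words) / len(words)
    ```

    Computing this fraction against several different term lists yields one feature per list, which is consistent with the paper's finding that the choice and combination of lists strongly affects accuracy.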