88 research outputs found

    AAPOR Report on Big Data

    In recent years we have seen an increase in the amount of statistics in society describing different phenomena based on so-called Big Data. The term Big Data covers a variety of data, as explained in the report, many of them characterized not just by their large volume, but also by their variety and velocity, the organic way in which they are created, and the new types of processes needed to analyze them and make inference from them. The changes in the nature of these new types of data, in their availability, and in the way they are collected and disseminated are fundamental; they constitute a paradigm shift for survey research. There is great potential in Big Data, but there are fundamental challenges that must be resolved before its full potential can be realized. In this report we give examples of different types of Big Data and their potential for survey research. We also describe the Big Data process and discuss its main challenges.

    A general approach for estimating scale score reliability for panel survey data.

    Scale score measures are ubiquitous in the psychological literature and can be used as both dependent and independent variables in data analysis. Poor reliability of scale score measures leads to inflated standard errors and/or biased estimates, particularly in multivariate analysis. To assess data quality, reliability estimation is usually an integral step in the analysis of scale score data. Cronbach’s α is a widely used indicator of reliability but, due to its rather strong assumptions, can be a poor estimator (Cronbach, 1951). For longitudinal data, an alternative approach is the simplex method; however, it too requires assumptions that may not hold in practice. One effective approach is an alternative estimator of reliability that relaxes the assumptions of both Cronbach’s α and the simplex estimator and, thus, generalizes both estimators. Using data from a large-scale panel survey, the benefits of the statistical properties of this estimator are investigated and its use is illustrated and compared with the more traditional estimators of reliability.
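    Cronbach’s α mentioned in the abstract can be computed directly from item-level data. A minimal sketch follows; the function name and the response matrix are illustrative, not taken from the paper:

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for a 2-D array of shape (respondents, items)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                         # number of items in the scale
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Illustrative data: 5 respondents answering a 3-item scale
    responses = np.array([
        [4, 5, 4],
        [2, 2, 3],
        [5, 5, 5],
        [3, 3, 2],
        [1, 2, 1],
    ])
    print(f"alpha = {cronbach_alpha(responses):.3f}")
    ```

    The estimator equals 1 when all items are perfectly correlated and falls as item-level noise grows, which is exactly why the abstract's concern about its strong assumptions (e.g. essential tau-equivalence) matters in practice.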

    Sampling and Mixed-Mode Survey Design

    This document summarizes the Wave V sampling and mixed-mode survey design. Whenever possible, data collection and methods in Wave V mirrored those of Wave IV to ensure comparability of data between waves. This document is one in a set of Wave V user guides.

    In Memory of Dr. Lars Lyberg. Remembering a Giant in Survey Research (1944-2021)

    On August 8, 2021, survey statisticians, survey methodologists, and survey researchers gathered virtually at the Joint Statistical Meetings of the American Statistical Association (JSM) to remember the amazing life of Lars Lyberg. This article presents the comments and memories shared by some of Lars’ closest colleagues at this memorial session. The co-authors dedicate this article to Lars in the hope that his work, his contributions, and his collaborative spirit will live on indefinitely.

    Methodology of Correcting Nonresponse Bias: Introducing Another Bias? The Case of the Swiss Innovation Survey 2002

    Nonresponse in a survey can lead to severe bias. To address this problem, it is common to conduct a second survey among a sample of nonrespondents. This makes it possible to test whether respondents and nonrespondents differ significantly on the survey's key variables and, if so, to take the difference into account. The risk, however, is that another bias is introduced, one that depends on the survey mode (mail vs. phone). The KOF industrial economics group has for many years studied the innovation behaviour of Swiss firms using a mail survey addressed to almost 6600 panel firms in the industrial, construction, and service sectors. For some years we have used data from a second survey of nonrespondents to correct for nonresponse bias. In contrast to the first survey, this one is conducted by phone. One can suspect that the personal interaction with the person(s) calling may introduce another bias. To investigate this question in the case of the ETH Zurich innovation survey 2002, we decided to conduct, alongside the regular nonrespondent phone survey, a similar phone survey among a subsample of the respondent group. We thus have data on the same variables from the two survey modes, allowing us to show whether or not response behaviour differs. We use different statistical approaches to investigate this issue, including χ2-tests and logit models. Our results show that the data collection method may influence the responses.
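    The mode comparison the abstract describes can be sketched as a χ2 test of independence on a mode-by-response contingency table. The counts below are hypothetical (not from the KOF study), and SciPy's `chi2_contingency` is assumed to be available:

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical counts for one survey item, cross-tabulated by mode.
    # Rows: collection mode (mail, phone); columns: response categories.
    table = np.array([
        [120, 80, 40],   # mail:  "yes", "no", "don't know"
        [150, 60, 10],   # phone: same categories
    ])

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
    # A small p-value suggests the response distribution differs by mode.
    ```

    A logit model, the abstract's other approach, would instead regress an indicator for a given response category on a mode dummy (plus covariates), testing whether the mode coefficient is significant.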

    METHODOLOGY FOR OPTIMAL DUAL FRAME SAMPLE DESIGN

    This series contains research reports, written by or in cooperation with staff members of the Statistical Research Division, whose content may be of interest to the general statistical research community. The views reflected in these reports are not necessarily those of the Census Bureau, nor do they necessarily represent Census Bureau statistical policy or practice. Inquiries may be addressed to the author(s) or the SRD Report Series.

    Modeling Measurement Error to Identify Flawed Questions


    Measurement Errors in Sample Surveys


    Processing of Survey Data


    Total Survey Error
