
    Auditing News Curation Systems: A Case Study Examining Algorithmic and Editorial Logic in Apple News

    This work presents an audit study of Apple News as a sociotechnical news curation system that exercises gatekeeping power in the media. We examine the mechanisms behind Apple News as well as the content presented in the app, outlining the social, political, and economic implications of both aspects. We focus on the Trending Stories section, which is algorithmically curated, and the Top Stories section, which is human-curated. Results from a crowdsourced audit showed minimal content personalization in the Trending Stories section, and a sock-puppet audit showed no location-based content adaptation. Finally, we perform an extended two-month data collection to compare the human-curated Top Stories section with the algorithmically curated Trending Stories section. Within these two sections, human curation outperformed algorithmic curation in several measures of source diversity, concentration, and evenness. Furthermore, algorithmic curation featured more "soft news" about celebrities and entertainment, while editorial curation featured more news about policy and international events. To our knowledge, this study provides the first data-backed characterization of Apple News in the United States.
    Comment: Preprint, to appear in Proceedings of the Fourteenth International AAAI Conference on Web and Social Media (ICWSM 2020).
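
    The abstract does not spell out which diversity, concentration, and evenness formulas the study used. As a hedged illustration only, standard choices would be Shannon entropy for diversity, the Herfindahl-Hirschman Index (HHI) for concentration, and Pielou's evenness; the minimal Python sketch below computes these over a list of article source names. The example data and metric choices are assumptions for illustration, not the authors' method:

        from collections import Counter
        from math import log

        def source_metrics(sources):
            # Count how often each outlet appears and convert to shares.
            counts = Counter(sources)
            total = sum(counts.values())
            shares = [c / total for c in counts.values()]
            # Shannon entropy: higher means more diverse sourcing.
            entropy = -sum(p * log(p) for p in shares)
            # HHI: sum of squared shares; higher means more concentrated.
            hhi = sum(p * p for p in shares)
            # Pielou's evenness: entropy normalized by its maximum, log(S).
            evenness = entropy / log(len(counts)) if len(counts) > 1 else 1.0
            return {"entropy": entropy, "hhi": hhi, "evenness": evenness}

        # Hypothetical sources behind one day's Top Stories.
        print(source_metrics(["CNN", "AP", "Reuters", "CNN", "BBC"]))

    Under these measures, a section drawing on many outlets in roughly equal proportion scores high on entropy and evenness and low on HHI.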

    Evaluating the Capabilities of LLMs for Supporting Anticipatory Impact Assessment

    Gaining insight into the potential negative impacts of emerging Artificial Intelligence (AI) technologies in society is a challenge for implementing anticipatory governance approaches. One approach to producing such insight is to use Large Language Models (LLMs) to support and guide experts in the process of ideating and exploring the range of undesirable consequences of emerging technologies. However, performance evaluations of LLMs for such tasks are still needed, examining not only the general quality of generated impacts but also the range of impact types produced and any resulting biases. In this paper, we demonstrate the potential for generating high-quality and diverse impacts of AI in society by fine-tuning completion models (GPT-3 and Mistral-7B) on a diverse sample of articles from news media and comparing those outputs to the impacts generated by instruction-based models (GPT-4 and Mistral-7B-Instruct). We examine the generated impacts for coherence, structure, relevance, and plausibility and find that the impacts generated using Mistral-7B, a small open-source model fine-tuned on impacts from the news media, tend to be qualitatively on par with impacts generated using a more capable, larger-scale model such as GPT-4. Moreover, we find that impacts produced by instruction-based models had gaps in the production of certain categories of impacts in comparison to fine-tuned models. This research highlights a potential bias in the range of impacts generated by state-of-the-art LLMs and the potential of aligning smaller LLMs on news media as a scalable alternative for generating high-quality and more diverse impacts in support of anticipatory governance approaches.
    Comment: 10 pages plus research ethics and social impact statement, references, and appendix. Under conference review.
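
    To make the fine-tuning setup concrete: completion models such as GPT-3 are typically tuned on prompt/completion pairs supplied as a JSONL file. The short Python sketch below assembles such a file; the records, the prompt template, and the file name are invented for illustration and do not reproduce the paper's dataset:

        import json

        # Hypothetical (technology, impact) pairs standing in for impacts
        # mined from news articles; the paper's actual data is not shown here.
        examples = [
            ("Facial recognition deployed in retail stores",
             "Shoppers may be misidentified and wrongly accused of theft."),
            ("LLM-generated news summaries",
             "Subtle factual errors can spread at scale before corrections appear."),
        ]

        with open("impacts_finetune.jsonl", "w") as f:
            for tech, impact in examples:
                # Legacy prompt/completion format used when fine-tuning
                # completion models; instruction-tuned models would take
                # chat-style messages instead.
                record = {
                    "prompt": f"Technology: {tech}\nImpact:",
                    "completion": " " + impact,
                }
                f.write(json.dumps(record) + "\n")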

    Negotiated Autonomy: The Role of Social Media Algorithms in Editorial Decision Making

    Social media platforms have increasingly become an important way for news organizations to distribute content to their audiences. As news organizations relinquish control over distribution, they may feel the need to optimize their content to align with platform logics to ensure economic sustainability. However, the opaque and often proprietary nature of platform algorithms makes it hard for news organizations to truly know what kinds of content are preferred and will perform well. Invoking the concept of algorithmic ‘folk theories,’ this article presents a study of in-depth, semi-structured interviews with 18 U.S.-based news journalists and editors to understand how they make sense of social media algorithms, and to what extent this influences editorial decision making. Our findings suggest that while journalists’ understandings of platform algorithms create new considerations for gatekeeping practices, the extent to which they influence those practices is often negotiated against traditional journalistic conceptions of newsworthiness and journalistic autonomy.

    Understanding Practices around Computational News Discovery Tools in the Domain of Science Journalism

    Science and technology journalists today face challenges in finding newsworthy leads due to increased workloads, reduced resources, and expanding scientific publishing ecosystems. Given this context, we explore computational methods to aid these journalists' news discovery in terms of time-efficiency and agency. In particular, we prototyped three computational information subsidies into an interactive tool that we used as a probe to better understand how such a tool may offer utility or more broadly shape the practices of professional science journalists. Our findings highlight central considerations around science journalists' agency, context, and responsibilities that such tools can influence and could account for in design. Based on this, we suggest design opportunities for greater and longer-term user agency; incorporating contextual, personal, and collaborative notions of newsworthiness; and leveraging flexible interfaces and generative models. Overall, our findings contribute a richer view of the sociotechnical system around computational news discovery tools, and suggest ways to improve such tools to better support the practices of science journalists.
    Comment: To be published in CSCW 202

    Anticipating Impacts: Using Large-Scale Scenario Writing to Explore Diverse Implications of Generative AI in the News Environment

    The tremendous rise of generative AI has reached every part of society, including the news environment. There are many concerns about the individual and societal impact of the increasing use of generative AI, including issues such as disinformation and misinformation, discrimination, and the promotion of social tensions. However, research on anticipating the impact of generative AI is still in its infancy and mostly limited to the views of technology developers and/or researchers. In this paper, we aim to broaden the perspective and capture the expectations of three stakeholder groups (news consumers, technology developers, and content creators) about the potential negative impacts of generative AI, as well as mitigation strategies to address these. Methodologically, we apply scenario writing and use participatory foresight in the context of a survey (n=119) to delve into cognitively diverse imaginations of the future. We qualitatively analyze the scenarios using thematic analysis to systematically map potential impacts of generative AI on the news environment, potential mitigation strategies, and the role of stakeholders in causing and mitigating these impacts. In addition, we measure respondents' opinions on a specific mitigation strategy, namely the transparency obligations suggested in Article 52 of the draft EU AI Act. We compare the results across the different stakeholder groups and elaborate on the presence (or absence) of different expected impacts across these groups. We conclude by discussing the usefulness of scenario writing and participatory foresight as a toolbox for generative AI impact assessment.