Algorithmic Accountability Reporting: On the Investigation of Black Boxes
How can we characterize the power that various algorithms may exert on us? And how can we better understand when algorithms might be wronging us? What should be the role of journalists in holding that power to account? In this report I discuss what algorithms are and how they encode power. I then describe the idea of algorithmic accountability, first examining how algorithms problematize and sometimes stand in tension with transparency. Next, I describe how reverse engineering can provide an alternative way to characterize algorithmic power by delineating a conceptual model that captures different investigative scenarios based on reverse engineering algorithms’ input-output relationships. I then provide a number of illustrative cases and methodological details on how algorithmic accountability reporting might be realized in practice. I conclude with a discussion about broader issues of human resources, legality, ethics, and transparency.
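To make the input-output idea concrete, here is a minimal, hypothetical Python sketch of such a probe: hold every input fixed except one, query the black box over a grid of inputs, and compare outputs. The toy system and all names are illustrative, not drawn from the report.

```python
from itertools import product

def query_black_box(query: str, location: str) -> list[str]:
    # Toy stand-in for the opaque system under audit; a real audit would
    # issue requests to the live service here. This fake adapts one query
    # to location so the probe below has something to detect.
    if query == "payday loans":
        return [f"ad-targeted-to-{location}", "results-for-payday-loans"]
    return [f"results-for-{query}"]

queries = ["payday loans", "mortgage rates"]
locations = ["60201", "10001", "94103"]

# Probe the system over a grid of inputs and record each output.
observations = [
    {"query": q, "location": loc, "results": query_black_box(q, loc)}
    for q, loc in product(queries, locations)
]

# Hold the query fixed and compare outputs across locations: more than
# one distinct result set implies the output depends on location.
for q in queries:
    variants = {tuple(o["results"]) for o in observations if o["query"] == q}
    print(f"{q!r} varies by location: {len(variants) > 1}")
```

The same grid-probing pattern scales to a real investigation by swapping the stand-in function for requests to the live system and logging responses over time.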
Social media, surveillance and news work: On the apps promising journalists a “crystal ball”
Social media platforms are becoming an indispensable resource for journalists. Their use involves both direct interaction with the platforms themselves and, increasingly, the use of specialist third-party apps to find, filter, and follow content and contributors. This article explores some of the ways social media platforms, and their technological ecosystems, are infusing news work. A range of platforms and apps—including Geofeedia, Spike, and Twitter—were critically examined, and their use by trainee journalists (N=81) analysed. The results reveal how journalists can—and do—surveil social network users and their content via sophisticated, professional apps that are also utilised by the police and security forces. While journalists recognise the value of such apps in news work, they also have concerns, including about privacy and popularism. And although the participants in this study thought the apps they used could help with verification, there were warning signs that an over-reliance on the technology could develop, dulling journalists’ critical faculties.
Algorithms, Automation, and News
This special issue examines the growing importance of algorithms and automation in the gathering, composition, and distribution of news. It connects a long line of research on journalism and computation with scholarly and professional terrain yet to be explored. Taken as a whole, these articles share some of the noble ambitions of the pioneering publications on ‘reporting algorithms’, such as a desire to see computing help journalists in their watchdog role by holding power to account. However, they also go further, firstly by addressing the fuller range of technologies that computational journalism now consists of: from chatbots and recommender systems, to artificial intelligence and atomised journalism. Secondly, they advance the literature by demonstrating the increased variety of uses for these technologies, including engaging underserved audiences, selling subscriptions, and recombining and re-using content. Thirdly, they problematize computational journalism by, for example, pointing out some of the challenges inherent in applying AI to investigative journalism and in trying to preserve public service values. Fourthly, they offer suggestions for future research and practice, including by presenting a framework for developing democratic news recommenders and another that may help us think about computational journalism in a more integrated, structured manner.
Auditing News Curation Systems: A Case Study Examining Algorithmic and Editorial Logic in Apple News
This work presents an audit study of Apple News as a sociotechnical news curation system that exercises gatekeeping power in the media. We examine the mechanisms behind Apple News as well as the content presented in the app, outlining the social, political, and economic implications of both aspects. We focus on the Trending Stories section, which is algorithmically curated, and the Top Stories section, which is human-curated. Results from a crowdsourced audit showed minimal content personalization in the Trending Stories section, and a sock-puppet audit showed no location-based content adaptation. Finally, we perform an extended two-month data collection to compare the human-curated Top Stories section with the algorithmically curated Trending Stories section. Within these two sections, human curation outperformed algorithmic curation in several measures of source diversity, concentration, and evenness. Furthermore, algorithmic curation featured more "soft news" about celebrities and entertainment, while editorial curation featured more news about policy and international events. To our knowledge, this study provides the first data-backed characterization of Apple News in the United States.
Comment: Preprint, to appear in Proceedings of the Fourteenth International AAAI Conference on Web and Social Media (ICWSM 2020).
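As a rough illustration of how source diversity, concentration, and evenness can be quantified over a curated feed, the Python sketch below computes three standard measures (Shannon entropy, the Herfindahl-Hirschman index, and Pielou's evenness) from per-outlet story counts. These particular metric choices are assumptions for illustration; the paper's exact measures may differ.

```python
import math
from collections import Counter

def source_metrics(outlets: list[str]) -> dict[str, float]:
    """Compute diversity, concentration, and evenness over a list of
    outlet names, one entry per curated story."""
    counts = Counter(outlets)
    total = sum(counts.values())
    shares = [c / total for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in shares)   # diversity
    hhi = sum(p * p for p in shares)                  # concentration
    # Pielou's evenness: observed entropy relative to its maximum.
    evenness = entropy / math.log(len(counts)) if len(counts) > 1 else 0.0
    return {"entropy": entropy, "hhi": hhi, "evenness": evenness}

# A feed dominated by one outlet vs. an evenly mixed feed.
print(source_metrics(["CNN", "CNN", "CNN", "CNN", "BBC", "NPR"]))
print(source_metrics(["CNN", "BBC", "NPR", "Reuters", "AP", "WSJ"]))
```

On such measures, a feed dominated by a single outlet scores high on concentration and low on evenness, which is the kind of contrast an audit can draw between editorial and algorithmic curation.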
Evaluating the Capabilities of LLMs for Supporting Anticipatory Impact Assessment
Gaining insight into the potential negative impacts of emerging Artificial Intelligence (AI) technologies in society is a challenge for implementing anticipatory governance approaches. One approach to produce such insight is to use Large Language Models (LLMs) to support and guide experts in the process of ideating and exploring the range of undesirable consequences of emerging technologies. However, performance evaluations of LLMs for such tasks are still needed, including examining not only the general quality of generated impacts but also the range of types of impacts produced and any resulting biases. In this paper, we demonstrate the potential for generating high-quality and diverse impacts of AI in society by fine-tuning completion models (GPT-3 and Mistral-7B) on a diverse sample of articles from news media and comparing those outputs to the impacts generated by instruction-based models (GPT-4 and Mistral-7B-Instruct). We examine the generated impacts for coherence, structure, relevance, and plausibility, and find that the impacts generated using Mistral-7B, a small open-source model fine-tuned on impacts from the news media, tend to be qualitatively on par with impacts generated using a more capable and larger-scale model such as GPT-4. Moreover, we find that impacts produced by instruction-based models had gaps in the production of certain categories of impacts in comparison to fine-tuned models. This research highlights a potential bias in the range of impacts generated by state-of-the-art LLMs and the potential of aligning smaller LLMs on news media as a scalable alternative for generating high-quality and more diverse impacts in support of anticipatory governance approaches.
Comment: 10 pages + research ethics and social impact statement, references, and appendix. Under conference review.
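A minimal sketch of this kind of comparison, assuming the Hugging Face transformers library: the same impact-elicitation prompt goes to a public instruction-tuned Mistral checkpoint and to a hypothetical local path standing in for a completion model fine-tuned on news-media impacts. The prompt and the fine-tuned checkpoint path are illustrative assumptions, not the paper's released artifacts.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_impact(model_id: str, prompt: str, max_new_tokens: int = 200) -> str:
    # Load the checkpoint and sample one continuation of the prompt.
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens,
                         do_sample=True, temperature=0.8)
    return tok.decode(out[0], skip_special_tokens=True)

technology = "emotion recognition software used in hiring"
prompt = f"Describe one potential negative societal impact of {technology}:"

# Public instruction-tuned baseline.
instruct_text = generate_impact("mistralai/Mistral-7B-Instruct-v0.2", prompt)

# Hypothetical local path standing in for a completion model fine-tuned
# on impact descriptions mined from news media (an assumption here).
finetuned_text = generate_impact("./mistral-7b-news-impacts", prompt)

print(instruct_text, finetuned_text, sep="\n---\n")
```

Collecting outputs elicited this way across many prompts and coding them by impact category is one way to surface the kinds of gaps the abstract describes.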
Negotiated Autonomy: The Role of Social Media Algorithms in Editorial Decision Making
Social media platforms have increasingly become an important way for news organizations to distribute content to their audiences. As news organizations relinquish control over distribution, they may feel the need to optimize their content to align with platform logics to ensure economic sustainability. However, the opaque and often proprietary nature of platform algorithms makes it hard for news organizations to truly know what kinds of content are preferred and will perform well. Invoking the concept of algorithmic ‘folk theories,’ this article presents a study of in-depth, semi-structured interviews with 18 U.S.-based news journalists and editors to understand how they make sense of social media algorithms, and to what extent this influences editorial decision making. Our findings suggest that while journalists’ understandings of platform algorithms create new considerations for gatekeeping practices, the extent to which they influence those practices is often negotiated against traditional journalistic conceptions of newsworthiness and journalistic autonomy.
Understanding Practices around Computational News Discovery Tools in the Domain of Science Journalism
Science and technology journalists today face challenges in finding newsworthy leads due to increased workloads, reduced resources, and expanding scientific publishing ecosystems. Given this context, we explore computational methods to aid these journalists' news discovery in terms of time-efficiency and agency. In particular, we prototyped three computational information subsidies into an interactive tool that we used as a probe to better understand how such a tool may offer utility or more broadly shape the practices of professional science journalists. Our findings highlight central considerations around science journalists' agency, context, and responsibilities that such tools can influence and could account for in design. Based on this, we suggest design opportunities for greater and longer-term user agency; incorporating contextual, personal, and collaborative notions of newsworthiness; and leveraging flexible interfaces and generative models. Overall, our findings contribute a richer view of the sociotechnical system around computational news discovery tools, and suggest ways to improve such tools to better support the practices of science journalists.
Comment: To be published in CSCW 202
Anticipating Impacts: Using Large-Scale Scenario Writing to Explore Diverse Implications of Generative AI in the News Environment
The tremendous rise of generative AI has reached every part of society, including the news environment. There are many concerns about the individual and societal impact of the increasing use of generative AI, including issues such as disinformation and misinformation, discrimination, and the promotion of social tensions. However, research on anticipating the impact of generative AI is still in its infancy and mostly limited to the views of technology developers and/or researchers. In this paper, we aim to broaden the perspective and capture the expectations of three stakeholder groups (news consumers; technology developers; content creators) about the potential negative impacts of generative AI, as well as mitigation strategies to address these. Methodologically, we apply scenario writing and use participatory foresight in the context of a survey (n=119) to delve into cognitively diverse imaginations of the future. We qualitatively analyze the scenarios using thematic analysis to systematically map potential impacts of generative AI on the news environment, potential mitigation strategies, and the role of stakeholders in causing and mitigating these impacts. In addition, we measure respondents' opinions on a specific mitigation strategy, namely the transparency obligations suggested in Article 52 of the draft EU AI Act. We compare the results across the different stakeholder groups and elaborate on the (non-)presence of different expected impacts across these groups. We conclude by discussing the usefulness of scenario writing and participatory foresight as a toolbox for generative AI impact assessment.
