Should The Stars In Your Service Flag Turn To Gold
Land Market Valuation of Groundwater
We estimate irrigation premiums and implicit marginal valuations of water in-storage using parcel-level transaction data for land sales in the Kansas portion of the High Plains Aquifer. We find that agricultural land values are 53% higher for irrigated parcels than non-irrigated parcels on average and that the irrigation premium has increased at an average rate of 1.0 percentage points per year over the sample period (1988–2015). Spatial heterogeneity in irrigation premiums is explained by differences in saturated thickness of the aquifer. Water in-storage is capitalized into land prices at average marginal values ranging from $15.86/acre-ft
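The irrigation premium described above is typically estimated with a hedonic log-price regression, where the coefficient on an irrigation dummy converts to a percentage premium. The sketch below illustrates this on simulated data; the coefficients and data-generating process are hypothetical, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical parcel data: irrigation status and saturated thickness (ft)
irrigated = rng.integers(0, 2, n)
sat_thickness = rng.uniform(0.0, 100.0, n)

# Simulated log sale price; 0.43 on the irrigation dummy implies
# an exp(0.43) - 1 ~ 53% premium, chosen to mirror the abstract's figure
log_price = (7.0 + 0.43 * irrigated + 0.002 * sat_thickness
             + rng.normal(0.0, 0.1, n))

# Ordinary least squares via the normal equations
X = np.column_stack([np.ones(n), irrigated, sat_thickness])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# Convert the log-point coefficient to a percentage premium
premium = np.expm1(beta[1])
```

Spatial heterogeneity could then be explored by interacting the irrigation dummy with saturated thickness, which is in the spirit of the abstract's finding.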
Battling the race: Stylizing language and coproducing whiteness and colouredness in a freestyle rap performance
In the last 19 years of post-apartheid South African democracy, race remains an enduring and
familiar trope, a point of certainty amid the messy ambiguities of transformation. In the
present article, we explore the malleable, permeable, and unstable racializations of contemporary
South Africa, specifically the way in which coloured and white racializations are negotiated
and interactionally accomplished in the context of Capetonian hip-hop. The analysis
reveals the complex ways in which racialized bodies are figured semiotically through reference
to historical time and contemporary (translocal) social space, but also the way iconic features
of blackness are reindexicalized to stand for a transnational whiteness
The state of the Martian climate
The average annual surface air temperature (SAT) anomaly for 2016 for land stations north of 60°N was +2.0°C, relative to the 1981–2010 average value (Fig. 5.1). This marks a new high for the record starting in 1900, and is a significant increase over the previous highest value of +1.2°C, which was observed in 2007, 2011, and 2015. Average global annual temperatures also showed record values in 2015 and 2016. Currently, the Arctic is warming at more than twice the rate of lower latitudes
Representation in AI evaluations
Calls for representation in artificial intelligence (AI) and machine learning (ML) are widespread, with "representation" or "representativeness" generally understood to be both an instrumentally and intrinsically beneficial quality of an AI system, and central to fairness concerns. But what does it mean for an AI system to be "representative"? Each element of the AI lifecycle is geared towards its own goals and effect on the system, therefore requiring its own analyses with regard to what kind of representation is best. In this work we untangle the benefits of representation in AI evaluations to develop a framework to guide an AI practitioner or auditor towards the creation of representative ML evaluations. Representation, however, is not a panacea. We further lay out the limitations and tensions of instrumentally representative datasets, such as the necessity of data existence and access, surveillance vs. expectations of privacy, implications for foundation models and power. This work sets the stage for a research agenda on representation in AI, which extends beyond instrumentally valuable representation in evaluations towards refocusing on, and empowering, impacted communities
Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models
Large language models produce human-like text that drives a growing number of
applications. However, recent literature and, increasingly, real world
observations, have demonstrated that these models can generate language that is
toxic, biased, untruthful or otherwise harmful. Though work to evaluate
language model harms is under way, translating foresight about which harms may
arise into rigorous benchmarks is not straightforward. To facilitate this
translation, we outline six ways of characterizing harmful text which merit
explicit consideration when designing new benchmarks. We then use these
characteristics as a lens to identify trends and gaps in existing benchmarks.
Finally, we apply them in a case study of the Perspective API, a toxicity
classifier that is widely used in harm benchmarks. Our characteristics provide
one piece of the bridge that translates between foresight and effective
evaluation.
Comment: Accepted to NeurIPS 2022 Datasets and Benchmarks Track; 10 pages plus appendix
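The Perspective API case study mentioned above scores text for attributes such as toxicity via a JSON request to its comments:analyze endpoint. The sketch below only builds such a request body; the field names follow the public v1alpha1 API, but treat the details (and the helper function itself) as illustrative assumptions rather than the paper's benchmark code:

```python
# Illustrative helper (not from the paper): build the JSON body that a
# harm benchmark might POST to Perspective API's endpoint, e.g.
# https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=API_KEY
def build_perspective_request(text, attributes=("TOXICITY",)):
    """Return a request body asking Perspective to score `text`."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {attr: {} for attr in attributes},
        "languages": ["en"],
    }

body = build_perspective_request("an example comment to score")
```

A benchmark harness would send this body with an API key and read the returned `attributeScores`; rate limits and attribute availability vary, so consult the API documentation before relying on specifics.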
Holistic Safety and Responsibility Evaluations of Advanced AI Models
Safety and responsibility evaluations of advanced AI models are a critical
but developing field of research and practice. In the development of Google
DeepMind's advanced AI models, we innovated on and applied a broad set of
approaches to safety evaluation. In this report, we summarise and share
elements of our evolving approach as well as lessons learned for a broad
audience. Key lessons learned include: First, theoretical underpinnings and
frameworks are invaluable to organise the breadth of risk domains, modalities,
forms, metrics, and goals. Second, theory and practice of safety evaluation
development each benefit from collaboration to clarify goals, methods and
challenges, and facilitate the transfer of insights between different
stakeholders and disciplines. Third, similar key methods, lessons, and
institutions apply across the range of concerns in responsibility and safety -
including established and emerging harms. For this reason it is important that
a wide range of actors working on safety evaluation and safety research
communities work together to develop, refine and implement novel evaluation
approaches and best practices, rather than operating in silos. The report
concludes with outlining the clear need to rapidly advance the science of
evaluations, to integrate new evaluations into the development and governance
of AI, to establish scientifically-grounded norms and standards, and to promote
a robust evaluation ecosystem.
Comment: 10 pages excluding bibliography
Taxonomy of risks posed by language models
Responsible innovation on large-scale Language Models (LMs) requires foresight into and in-depth understanding of the risks these models may pose. This paper develops a comprehensive taxonomy of ethical and social risks associated with LMs. We identify twenty-one risks, drawing on expertise and literature from computer science, linguistics, and the social sciences. We situate these risks in our taxonomy of six risk areas: I. Discrimination, Hate speech and Exclusion, II. Information Hazards, III. Misinformation Harms, IV. Malicious Uses, V. Human-Computer Interaction Harms, and VI. Environmental and Socioeconomic harms. For risks that have already been observed in LMs, the causal mechanism leading to harm, evidence of the risk, and approaches to risk mitigation are discussed. We further describe and analyse risks that have not yet been observed but are anticipated based on assessments of other language technologies, and situate these in the same taxonomy. We underscore that it is the responsibility of organizations to engage with the mitigations we discuss throughout the paper. We close by highlighting challenges and directions for further research on risk evaluation and mitigation with the goal of ensuring that language models are developed responsibly
- …
