
    Planning with Multiple Biases

    Recent work has considered theoretical models for the behavior of agents with specific behavioral biases: rather than making decisions that optimize a given payoff function, the agent behaves inefficiently because its decisions suffer from an underlying bias. These approaches have generally considered an agent who experiences a single behavioral bias, studying the effect of this bias on the outcome. In general, however, decision-making can and will be affected by multiple biases operating at the same time. How do multiple biases interact to produce the overall outcome? Here we consider decisions in the presence of a pair of biases exhibiting an intuitively natural interaction: present bias -- the tendency to value costs incurred in the present too highly -- and sunk-cost bias -- the tendency to incorporate costs experienced in the past into one's plans for the future. We propose a theoretical model for planning with this pair of biases, and we show how certain natural behavioral phenomena can arise in our model only when agents exhibit both biases. As part of our model we differentiate between agents that are aware of their biases (sophisticated) and agents that are unaware of them (naive). Interestingly, we show that the interaction between the two biases is quite complex: in some cases they mitigate each other's effects, while in other cases they might amplify each other. We obtain a number of further results as well, including the fact that the planning problem in our model for an agent experiencing and aware of both biases is computationally hard in general, though tractable under more relaxed assumptions.
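
    The abstract leaves the formal model to the paper itself; the following is a minimal sketch, under our own assumptions, of how the two biases could enter a single continue-or-abandon decision. The functional form, the parameters b and lam, and the function name are illustrative choices, not the paper's definitions.

```python
def prefers_to_continue(immediate_cost, future_cost, reward,
                        sunk_cost, b=2.0, lam=0.5):
    """Illustrative decision rule for an agent with both biases.

    b   >= 1 : present bias -- the cost paid *now* is inflated by b.
    lam >= 0 : sunk-cost bias -- abandoning feels worse by lam * sunk_cost,
               because past expenditure is folded into the comparison.
    A rational agent corresponds to b = 1, lam = 0.
    (Assumed form for illustration; not the model from the paper.)
    """
    continue_value = reward - (b * immediate_cost + future_cost)
    abandon_value = -lam * sunk_cost
    return continue_value > abandon_value
```

    In this toy form the intuition from the abstract is visible: sunk-cost bias pushes the agent to keep going once costs have been paid, while present bias makes the next step look expensive, so the two forces can either offset or compound each other depending on the parameters.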

    Planning Problems for Sophisticated Agents with Present Bias

    Present bias, the tendency to weigh costs and benefits incurred in the present too heavily, is one of the most widespread human behavioral biases. It has also been the subject of extensive study in the behavioral economics literature. While the simplest models assume that the agents are naive, reasoning about the future without taking their bias into account, there is considerable evidence that people often behave in ways that are sophisticated with respect to present bias, making plans based on the belief that they will be present-biased in the future. For example, committing to a course of action in order to reduce future opportunities for procrastination or overconsumption is an instance of sophisticated behavior in everyday life. Models of sophisticated behavior have lacked an underlying formalism that allows one to reason over the full space of multi-step tasks that a sophisticated agent might face. This has made it correspondingly difficult to make comparative or worst-case statements about the performance of sophisticated agents in arbitrary scenarios. In this paper, we incorporate the notion of sophistication into a graph-theoretic model that we used in recent work for modeling naive agents. This new synthesis of two formalisms, sophistication and graph-theoretic planning, uncovers a rich structure that was not apparent in the earlier behavioral economics work on this problem. In particular, our graph-theoretic model makes two kinds of new results possible. First, we give tight worst-case bounds on the performance of sophisticated agents in arbitrary multi-step tasks relative to the optimal plan. Second, the flexibility of our formalism makes it possible to identify new phenomena that had not been seen in prior literature: these include a surprising non-monotonic property in the use of rewards to motivate sophisticated agents and a framework for reasoning about commitment devices.
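
    As a concrete illustration of the graph-theoretic setting, here is a small sketch. It assumes the standard form of such a model (a task graph given as a DAG with edge costs in which every node can reach the goal, and an agent that inflates only the immediate edge cost by a factor b >= 1); the helper names and the use of networkx are our own choices, not the paper's code.

```python
import networkx as nx

def naive_step(G, v, t, b):
    """One step of a naive present-biased agent at node v: it inflates the
    edge it would traverse right now by b and assumes (wrongly) that it will
    behave optimally and without bias afterwards."""
    return min(G.successors(v),
               key=lambda u: b * G[v][u]["cost"]
               + nx.shortest_path_length(G, u, t, weight="cost"))

def sophisticated_path(G, s, t, b):
    """Backward induction over the DAG: each choice anticipates that all
    *future* selves are also present-biased, which is what sophistication
    means in this setting."""
    actual_cost, choice = {t: 0.0}, {}
    for v in reversed(list(nx.topological_sort(G))):
        succs = list(G.successors(v))
        if v == t or not succs:
            continue
        best_u = min(succs, key=lambda u: b * G[v][u]["cost"]
                     + actual_cost.get(u, float("inf")))
        choice[v] = best_u
        actual_cost[v] = G[v][best_u]["cost"] + actual_cost.get(best_u, float("inf"))
    path, v = [s], s
    while v != t:          # reconstruct the path the agent actually follows
        v = choice[v]
        path.append(v)
    return path
```

    Comparing the cost of `sophisticated_path(G, s, t, b)` with the unbiased shortest path on a given instance gives the kind of ratio whose worst case the paper bounds; the naive agent's trajectory is obtained by applying `naive_step` repeatedly from the start node.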

    Selection Problems in the Presence of Implicit Bias

    Over the past two decades, the notion of implicit bias has come to serve as an important component in our understanding of bias and discrimination in activities such as hiring, promotion, and school admissions. Research on implicit bias posits that when people evaluate others - for example, in a hiring context - their unconscious biases about membership in particular demographic groups can have an effect on their decision-making, even when they have no deliberate intention to discriminate against members of these groups. A growing body of experimental work has demonstrated the effect that implicit bias can have in producing adverse outcomes. Here we propose a theoretical model for studying the effects of implicit bias on selection decisions, and a way of analyzing possible procedural remedies for implicit bias within this model. A canonical situation represented by our model is a hiring setting, in which recruiters are trying to evaluate the future potential of job applicants, but their estimates of potential are skewed by an unconscious bias against members of one group. In this model, we show that measures such as the Rooney Rule, a requirement that at least one member of an underrepresented group be selected, can not only improve the representation of the affected group, but also lead to higher payoffs in absolute terms for the organization performing the recruiting. However, identifying the conditions under which such measures can lead to improved payoffs involves subtle trade-offs between the extent of the bias and the underlying distribution of applicant characteristics, leading to novel theoretical questions about order statistics in the presence of probabilistic side information.
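
    A small Monte Carlo sketch of this kind of hiring setting follows. The candidate distribution, the multiplicative form of the bias, the shortlist mechanism, and all parameter values are assumptions made for illustration rather than the paper's specification.

```python
import random

def expected_hire_value(bias=0.5, n_a=8, n_b=2, shortlist=2,
                        use_rooney=False, trials=20000, seed=0):
    """Recruiters shortlist by *perceived* potential (group-B candidates are
    discounted by `bias`), the interview then reveals true potential, and the
    best shortlisted candidate is hired.  Returns the average true potential
    of the hire.  Distributions and mechanism are illustrative assumptions."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        pool = ([("A", rng.expovariate(1.0)) for _ in range(n_a)]
                + [("B", rng.expovariate(1.0)) for _ in range(n_b)])
        perceived = lambda c: c[1] * (bias if c[0] == "B" else 1.0)
        chosen = sorted(pool, key=perceived, reverse=True)[:shortlist]
        if use_rooney and all(g == "A" for g, _ in chosen):
            # Rooney-style constraint: the shortlist must include the
            # best-appearing group-B candidate.
            chosen[-1] = max((c for c in pool if c[0] == "B"), key=perceived)
        total += max(v for _, v in chosen)
    return total / trials

# e.g. compare expected_hire_value(use_rooney=False)
#          with expected_hire_value(use_rooney=True)
```

    In this toy setup a strong enough bias can make the constrained shortlist yield a higher expected hire quality, which is the qualitative effect the abstract describes; whether it does depends on the bias and the candidate distributions, mirroring the trade-offs the paper analyzes.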

    Limitations of the “Four-Fifths Rule” and Statistical Parity Tests for Measuring Fairness

    To ensure the fairness of algorithmic decision systems, such as employment selection tools, computer scientists and practitioners often refer to the so-called “four-fifths rule” as a measure of a tool’s compliance with anti-discrimination law. This reliance is problematic because the “rule” is in fact not a legal rule for establishing discrimination, and it offers a crude test that will often be over- and under-inclusive in identifying practices that warrant further scrutiny. The “four-fifths rule” is one of a broader class of statistical tests, which we call Statistical Parity Tests (SPTs), that compare selection rates across demographic groups. While some SPTs are more statistically robust, all share some critical limitations in identifying disparate impacts retrospectively. When these tests are used prospectively as an optimization objective shaping model development, additional concerns arise about the development process, behavioral incentives, and gameability. In this Article, we discuss the appropriate role for SPTs in algorithmic governance. We suggest a combination of measures that take advantage of the additional information present during the development process.
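
    For concreteness, the selection-rate comparison at the core of the four-fifths rule is easy to state in code; here is a minimal sketch (function and variable names are ours). The Article's point that the test is a crude screen is visible in how little information it uses: one ratio per group.

```python
def four_fifths_check(selected, applicants):
    """selected / applicants: dicts mapping group name -> counts.
    Returns, per group, the impact ratio (the group's selection rate divided
    by the highest group's selection rate) and whether it clears 4/5."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: (rate / top, rate / top >= 0.8) for g, rate in rates.items()}

# Example: 50% vs. 30% selection rates -> impact ratio 0.6, below 4/5.
# four_fifths_check({"X": 50, "Y": 15}, {"X": 100, "Y": 50})
```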

    The Right to be an Exception to a Data-Driven Rule

    Data-driven tools are increasingly used to make consequential decisions. They have begun to advise employers on which job applicants to interview, judges on which defendants to grant bail, lenders on which homeowners to give loans, and more. In such settings, different data-driven rules result in different decisions. The problem is: to every data-driven rule, there are exceptions. While a data-driven rule may be appropriate for some, it may not be appropriate for all. As data-driven decisions become more common, there are cases in which it becomes necessary to protect the individuals who, through no fault of their own, are the data-driven exceptions. At the same time, it is impossible to scrutinize every one of the increasing number of data-driven decisions, raising the question: When and how should data-driven exceptions be protected? In this piece, we argue that individuals have the right to be an exception to a data-driven rule. That is, the presumption should not be that a data-driven rule, even one with high accuracy, is suitable for an arbitrary decision-subject of interest. Rather, a decision-maker should apply the rule only if they have exercised due care and due diligence (relative to the risk of harm) in excluding the possibility that the decision-subject is an exception to the data-driven rule. In some cases, the risk of harm may be so low that only cursory consideration is required. Although applying due care and due diligence is meaningful in human-driven decision contexts, it is unclear what doing so means for a data-driven rule. We propose that determining whether a data-driven rule is suitable for a given decision-subject requires the consideration of three factors: individualization, uncertainty, and harm. We unpack this right in detail, providing a framework for assessing data-driven rules and describing what it would mean to invoke the right in practice.

    Mapping the Invocation Structure of Online Political Interaction

    The surge in political information, discourse, and interaction has been one of the most important developments in social media over the past several years. There is rich structure in the interaction among different viewpoints on the ideological spectrum. However, we still have only a limited analytical vocabulary for expressing the ways in which these viewpoints interact. In this paper, we develop network-based methods that operate on the ways in which users share content; we construct invocation graphs on Web domains showing the extent to which pages from one domain are invoked by users to reply to posts containing pages from other domains. When we locate the domains on a political spectrum induced from the data, we obtain an embedded graph showing how these interaction links span different distances on the spectrum. The structure of this embedded network, and its evolution over time, helps us derive macro-level insights about how political interaction unfolded through 2016, leading up to the US Presidential election. In particular, we find that the domains invoked in replies spanned increasing distances on the spectrum over the months approaching the election, and that there was clear asymmetry between the left-to-right and right-to-left patterns of linkage.
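
    A minimal sketch of how such an invocation graph could be assembled from reply data follows. The input format (pairs of URLs), the domain handling, and the function names are assumptions made for illustration, not the paper's pipeline.

```python
from collections import Counter
from urllib.parse import urlparse

def build_invocation_graph(reply_pairs):
    """reply_pairs: iterable of (url_in_reply, url_in_original_post).
    Returns a weighted directed edge count over Web domains: the count of
    (d_reply, d_post) is how often pages from d_reply were invoked in replies
    to posts containing pages from d_post."""
    domain = lambda url: urlparse(url).netloc.lower()
    return Counter((domain(r), domain(p)) for r, p in reply_pairs)

def spectrum_spans(edges, position):
    """Given a domain -> ideological-position map (e.g. induced from sharing
    behavior), return the signed spectrum distance spanned by each edge along
    with its weight, the kind of quantity the paper tracks over time."""
    return [(position[a] - position[b], w)
            for (a, b), w in edges.items()
            if a in position and b in position]
```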

    Inherent Trade-Offs in the Fair Determination of Risk Scores

    Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them.
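
    The abstract does not name the three conditions; the sketch below computes, per group, the quantities usually associated with this impossibility result (calibration within groups, and balance of average scores for the positive and negative classes). This is our reading rather than a quotation from the paper, and the code is only an illustrative diagnostic.

```python
import numpy as np

def fairness_diagnostics(scores, labels, groups):
    """scores: predicted probabilities; labels: 0/1 outcomes; groups: group ids.
    Per group, report the quantities whose simultaneous equalization the
    impossibility result concerns (our reading of the three conditions):
      - calibration: mean outcome vs. mean score within the group,
      - balance for the positive class: mean score among label == 1,
      - balance for the negative class: mean score among label == 0."""
    scores, labels, groups = map(np.asarray, (scores, labels, groups))
    report = {}
    for g in np.unique(groups):
        m = groups == g
        report[g] = {
            "mean_score": float(scores[m].mean()),
            "base_rate": float(labels[m].mean()),
            "avg_score_positive": float(scores[m & (labels == 1)].mean()),
            "avg_score_negative": float(scores[m & (labels == 0)].mean()),
        }
    return report
```

    When base rates differ across groups and prediction is imperfect, these per-group quantities cannot all be equalized at once; that is the informal content of the impossibility result described above.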