
    When Are Welfare Guarantees Robust?

    Computational and economic results suggest that social welfare maximization and combinatorial auction design are much easier when bidders' valuations satisfy the "gross substitutes" condition. The goal of this paper is to evaluate rigorously the folklore belief that the main take-aways from these results remain valid in settings where the gross substitutes condition holds only approximately. We show that for valuations that pointwise approximate a gross substitutes valuation (in fact, even a linear valuation), optimal social welfare cannot be approximated to within a subpolynomial factor and demand oracles cannot be simulated using a subexponential number of value queries. We then provide several positive results by imposing additional structure on the valuations (beyond gross substitutes), using a more stringent notion of approximation, and/or using more powerful oracle access to the valuations. For example, we prove that the performance of the greedy algorithm degrades gracefully for near-linear valuations with approximately decreasing marginal values; that with demand queries, approximate welfare guarantees for XOS valuations degrade gracefully for valuations that are pointwise close to XOS; and that the performance of the Kelso-Crawford auction degrades gracefully for valuations that are close to various subclasses of gross substitutes valuations.
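
    To make the greedy algorithm concrete, here is a minimal sketch (an illustration, not the paper's analysis) of the standard greedy allocation it studies: items are assigned one at a time to the bidder with the highest marginal value. The bidder valuations and item names below are made-up assumptions.

        # Greedy allocation for combinatorial welfare maximization (illustrative sketch).
        # Each bidder has a valuation function v(bundle) -> float; every item goes to
        # whichever bidder currently gains the most marginal value from it.

        def greedy_allocation(items, valuations):
            """valuations: one function set -> float per bidder."""
            bundles = [set() for _ in valuations]
            for item in items:
                # Marginal value of `item` for each bidder, given its current bundle.
                gains = [v(b | {item}) - v(b) for v, b in zip(valuations, bundles)]
                winner = max(range(len(valuations)), key=lambda i: gains[i])
                bundles[winner].add(item)
            return bundles

        # Example: two bidders with additive (linear) valuations over three items.
        weights = [{"a": 3, "b": 1, "c": 2}, {"a": 1, "b": 4, "c": 2}]
        vals = [lambda S, w=w: sum(w[x] for x in S) for w in weights]
        print(greedy_allocation(["a", "b", "c"], vals))  # [{'a', 'c'}, {'b'}]

    For additive valuations this greedy rule is exactly optimal; the paper's question is how much the guarantee degrades when the valuations are only approximately of this form.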

    Oblivious Rounding and the Integrality Gap

    The following paradigm is often used for handling NP-hard combinatorial optimization problems. One first formulates the problem as an integer program, then one relaxes it to a linear program (LP, or more generally, a convex program), then one solves the LP relaxation in polynomial time, and finally one rounds the optimal LP solution, obtaining a feasible solution to the original problem. Many of the commonly used rounding schemes (such as randomized rounding, threshold rounding and others) are "oblivious" in the sense that the rounding is performed based on the LP solution alone, disregarding the objective function. The goal of our work is to better understand in which cases oblivious rounding suffices in order to obtain approximation ratios that match the integrality gap of the underlying LP. Our study is information-theoretic: the rounding is restricted to be oblivious but not restricted to run in polynomial time. In this information-theoretic setting we characterize the approximation ratio achievable by oblivious rounding. It turns out to equal the integrality gap of the underlying LP on a problem that is the closure of the original combinatorial optimization problem. We apply our findings to the study of the approximation ratios obtainable by oblivious rounding for the maximum welfare problem, showing that when valuation functions are submodular oblivious rounding can match the integrality gap of the configuration LP (though we do not know what this integrality gap is), but when valuation functions are gross substitutes oblivious rounding cannot match the integrality gap (which is 1).
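
    As a concrete instance of the relax-and-round paradigm, here is a minimal sketch of oblivious threshold rounding for vertex cover, a textbook example rather than anything from the paper: solve the LP relaxation, then keep every vertex whose fractional value is at least 1/2, without consulting the objective during rounding. The scipy dependency and the toy triangle graph are assumptions.

        # Oblivious threshold rounding for vertex cover (illustrative sketch).
        # IP: minimize sum(x_v) s.t. x_u + x_v >= 1 for every edge, x_v in {0, 1}.
        # Relax x_v to [0, 1], solve the LP, then round obliviously: keep v iff
        # x_v >= 1/2. This is the classic 2-approximation by threshold rounding.
        from scipy.optimize import linprog

        def vertex_cover_round(n, edges):
            c = [1.0] * n  # minimize the number of chosen vertices
            # Each constraint x_u + x_v >= 1 becomes -(x_u + x_v) <= -1 for linprog.
            A = [[-1.0 if i in (u, v) else 0.0 for i in range(n)] for (u, v) in edges]
            b = [-1.0] * len(edges)
            lp = linprog(c, A_ub=A, b_ub=b, bounds=[(0.0, 1.0)] * n)
            # Oblivious step: the rounding sees only the LP solution, not the objective.
            return [v for v in range(n) if lp.x[v] >= 0.5]

        print(vertex_cover_round(3, [(0, 1), (1, 2), (2, 0)]))  # [0, 1, 2]: all x_v = 0.5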

    Vertex Sparsifiers: New Results from Old Techniques

    Given a capacitated graph $G = (V,E)$ and a set of terminals $K \subseteq V$, how should we produce a graph $H$ only on the terminals $K$ so that every (multicommodity) flow between the terminals in $G$ could be supported in $H$ with low congestion, and vice versa? (Such a graph $H$ is called a flow-sparsifier for $G$.) What if we want $H$ to be a "simple" graph? What if we allow $H$ to be a convex combination of simple graphs? Improving on results of Moitra [FOCS 2009] and Leighton and Moitra [STOC 2010], we give efficient algorithms for constructing: (a) a flow-sparsifier $H$ that maintains congestion up to a factor of $O(\log k/\log\log k)$, where $k = |K|$, (b) a convex combination of trees over the terminals $K$ that maintains congestion up to a factor of $O(\log k)$, and (c) for a planar graph $G$, a convex combination of planar graphs that maintains congestion up to a constant factor. This requires us to give a new algorithm for the 0-extension problem, the first one in which the preimages of each terminal are connected in $G$. Moreover, this result extends to minor-closed families of graphs. Our improved bounds immediately imply improved approximation guarantees for several terminal-based cut and ordering problems. (An extended abstract appeared in the 13th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), 2010; the final version appears in SIAM J. Computing.)
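
    The paper's constructions are involved, but the quality criterion is easy to probe on small examples. Below is a minimal sanity-check sketch, assuming networkx, that compares pairwise terminal min-cuts in $G$ and in a candidate sparsifier $H$; preserving pairwise cuts is only a necessary, cut-based shadow of the multicommodity congestion guarantee, not the paper's construction.

        # Simplified quality probe for a candidate terminal sparsifier (illustrative).
        import itertools
        import networkx as nx

        def worst_cut_ratio(G, H, terminals):
            """Max distortion of s-t min-cut values between G and H over terminal pairs."""
            worst = 1.0
            for s, t in itertools.combinations(terminals, 2):
                cut_g = nx.minimum_cut_value(G, s, t, capacity="capacity")
                cut_h = nx.minimum_cut_value(H, s, t, capacity="capacity")
                worst = max(worst, cut_g / cut_h, cut_h / cut_g)
            return worst

        G = nx.Graph()
        G.add_edge("a", "x", capacity=2.0)   # "x" is a non-terminal to be eliminated
        G.add_edge("x", "b", capacity=1.0)
        H = nx.Graph()                       # candidate sparsifier on terminals {a, b}
        H.add_edge("a", "b", capacity=1.0)   # preserves the a-b bottleneck of G
        print(worst_cut_ratio(G, H, ["a", "b"]))  # 1.0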

    Bayesian Analysis of Linear Contracts

    We provide a justification for the prevalence of linear (commission-based) contracts in practice under the Bayesian framework. We consider a hidden-action principal-agent model in which actions require different amounts of effort and the agent's cost per unit of effort is private. We show that linear contracts are near-optimal whenever there is sufficient uncertainty in the principal-agent setting.
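
    A minimal numerical sketch of the hidden-action setup (all numbers invented for illustration): under a linear contract with share alpha, the agent picks the effort level maximizing alpha * expected_reward - cost, and the principal keeps the remaining (1 - alpha) share of the expected reward.

        # Linear contract in a hidden-action principal-agent model (toy sketch).
        # Each action is (expected reward to the principal, units of effort); the
        # agent's per-unit effort cost c is private. A linear contract pays the
        # agent alpha times the realized reward.
        actions = [(0.0, 0.0), (6.0, 1.0), (10.0, 3.0)]

        def agent_best_response(alpha, c):
            # The agent maximizes its share of the reward minus its effort cost.
            return max(actions, key=lambda a: alpha * a[0] - c * a[1])

        def principal_utility(alpha, c):
            reward, _ = agent_best_response(alpha, c)
            return (1.0 - alpha) * reward

        # If the cost were known to be c = 1, scan for the principal's best share.
        best = max((a / 100 for a in range(101)), key=lambda a: principal_utility(a, 1.0))
        print(best, principal_utility(best, 1.0))  # -> 0.17, approx. 4.98

    The paper's point concerns the Bayesian case where c is drawn from a prior: a single well-chosen alpha remains near-optimal when that prior is sufficiently uncertain.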

    Incentivizing Quality Text Generation via Statistical Contracts

    While the success of large language models (LLMs) increases demand for machine-generated text, current pay-per-token pricing schemes create a misalignment of incentives known in economics as moral hazard: text-generating agents have a strong incentive to cut costs by preferring a cheaper model over the cutting-edge one, and this can be done "behind the scenes" since the agent performs inference internally. In this work, we approach this issue from an economic perspective, by proposing a pay-for-performance, contract-based framework for incentivizing quality. We study a principal-agent game where the agent generates text using costly inference, and the contract determines the principal's payment for the text according to an automated quality evaluation. Since standard contract theory is inapplicable when internal inference costs are unknown, we introduce cost-robust contracts. As our main theoretical contribution, we characterize optimal cost-robust contracts through a direct correspondence to optimal composite hypothesis tests from statistics, generalizing a result of Saig et al. (NeurIPS'23). We evaluate our framework empirically by deriving contracts for a range of objectives and LLM evaluation benchmarks, and find that cost-robust contracts sacrifice only a marginal increase in objective value compared to their cost-aware counterparts.
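
    A minimal sketch of the pay-for-performance idea with a binary quality test (the probabilities and cost bound are made-up; the paper's contracts come from general composite hypothesis tests): the contract pays a bonus when the automated evaluator's score clears a threshold, and the bonus is sized so that running the expensive model is the agent's best response for every internal cost up to a robustness bound c_max.

        # Cost-robust contract from a simple binary quality test (illustrative).
        # The evaluator's score clears the threshold with probability p_hi under the
        # cutting-edge model and p_lo under the cheap one; the extra inference cost
        # is unknown to the principal but assumed to be at most c_max.

        def cost_robust_bonus(p_hi, p_lo, c_max):
            # Incentives: p_hi * b - c >= p_lo * b must hold for every c <= c_max,
            # so the smallest robust bonus is c_max / (p_hi - p_lo).
            assert p_hi > p_lo
            return c_max / (p_hi - p_lo)

        def prefers_quality(bonus, p_hi, p_lo, true_cost):
            return p_hi * bonus - true_cost >= p_lo * bonus

        b = cost_robust_bonus(p_hi=0.8, p_lo=0.3, c_max=1.0)
        print(b)                                    # 2.0
        print(prefers_quality(b, 0.8, 0.3, 0.7))    # True for any cost <= c_max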

    Multi-Channel Bayesian Persuasion

    The celebrated Bayesian persuasion model considers strategic communication between an informed agent (the sender) and uninformed decision makers (the receivers). The current rapidly-growing literature mostly assumes a dichotomy: either the sender is powerful enough to communicate separately with each receiver (a.k.a. private persuasion), or she cannot communicate separately at all (a.k.a. public persuasion). We study a model that smoothly interpolates between the two, by considering a natural multi-channel communication structure in which each receiver observes a subset of the sender's communication channels. This captures, e.g., receivers on a network, where information spillover is almost inevitable. We completely characterize when one communication structure is better for the sender than another, in the sense of yielding higher optimal expected utility universally over all prior distributions and utility functions. The characterization is based on a simple pairwise relation among receivers - one receiver information-dominates another if he observes at least the same channels. We prove that a communication structure $M_1$ is (weakly) better than $M_2$ if and only if every information-dominating pair of receivers in $M_1$ is also such in $M_2$. We also provide an additive FPTAS for the optimal sender's signaling scheme when the number of states is constant and the graph of information-dominating pairs is a directed forest. Finally, we prove that finding an optimal signaling scheme under multi-channel persuasion is, generally, computationally harder than under both public and private persuasion.
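
    The characterization is straightforward to check mechanically. Here is a minimal sketch (the dictionary encoding is an assumption) that computes the information-domination pairs of a communication structure and tests whether one structure is weakly better than another:

        # Check the "weakly better" characterization for multi-channel persuasion.
        # A structure maps each receiver to the set of channels it observes.

        def domination_pairs(structure):
            """Pairs (i, j) where receiver i observes all channels receiver j does."""
            return {(i, j) for i in structure for j in structure
                    if i != j and structure[j] <= structure[i]}

        def weakly_better(m1, m2):
            # M1 is weakly better than M2 iff every information-dominating pair of
            # receivers in M1 is also information-dominating in M2.
            return domination_pairs(m1) <= domination_pairs(m2)

        m1 = {"r1": {"c1"}, "r2": {"c2"}}        # neither receiver dominates the other
        m2 = {"r1": {"c1", "c2"}, "r2": {"c2"}}  # r1 information-dominates r2
        print(weakly_better(m1, m2))  # True: m1 has no domination pairs at all
        print(weakly_better(m2, m1))  # False: the pair (r1, r2) is missing in m1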

    Algorithmic Cheap Talk

    The literature on strategic communication originated with the influential cheap talk model, which precedes the Bayesian persuasion model by three decades. This model describes an interaction between two agents: a sender and a receiver. The sender knows some state of the world which the receiver does not know, and tries to influence the receiver's action by communicating a cheap talk message to the receiver. This paper initiates the algorithmic study of cheap talk in a finite environment (i.e., a finite number of states and receiver's possible actions). We first prove that approximating the sender-optimal or the welfare-maximizing cheap talk equilibrium up to a certain additive constant or multiplicative factor is NP-hard. Fortunately, we identify three naturally restricted cases that admit efficient algorithms for finding a sender-optimal equilibrium: a state-independent sender utility structure, a constant number of states, or a receiver having only two actions.
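
    For intuition on the two-action case, a sender-optimal pure equilibrium can be found by brute force on tiny instances. The sketch below (the toy utilities, uniform prior, and tie-breaking toward action 1 are all assumptions; the paper's polynomial-time algorithms are more refined) enumerates partitions of the states into at most two messages and keeps those that are incentive compatible for the sender.

        # Brute-force sender-optimal pure cheap-talk equilibrium, two receiver actions.
        from itertools import combinations

        states, prior = [0, 1, 2], [1 / 3, 1 / 3, 1 / 3]
        u_s = lambda s, a: 1.0 if (a == 1) == (s >= 1) else 0.0  # sender's utility
        u_r = lambda s, a: 1.0 if (a == 1) == (s == 2) else 0.0  # receiver's utility

        def best_response(msg):
            # The receiver best-responds to the posterior over the states in `msg`.
            score = lambda a: sum(prior[s] * u_r(s, a) for s in msg)
            return 1 if score(1) >= score(0) else 0  # ties broken toward action 1

        def equilibrium_value(partition):
            acts = {m: best_response(m) for m in partition}
            # Sender incentive compatibility: no state gains by switching messages.
            for m in partition:
                for s in m:
                    if any(u_s(s, a) > u_s(s, acts[m]) for a in acts.values()):
                        return None  # some sender type would deviate: no equilibrium
            return sum(prior[s] * u_s(s, acts[m]) for m in partition for s in m)

        # All partitions into at most two messages (enough when there are 2 actions).
        partitions = [(tuple(states),)] + [
            (t, tuple(s for s in states if s not in t))
            for r in range(1, len(states)) for t in combinations(states, r)
        ]
        values = [v for v in map(equilibrium_value, partitions) if v is not None]
        print(max(values))  # 1.0: revealing {0} vs. {1, 2} is sender-optimal here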