A Damage-Revelation Rationale for Coupon Remedies
This article studies optimal remedies in a setting in which damages vary among plaintiffs and are difficult to determine. We show that giving plaintiffs a choice between cash and coupons to purchase units of the defendant's product at a discount -- a "coupon-cash remedy" -- is superior to cash alone. The optimal coupon-cash remedy offers a cash amount that is less than the value of the coupons to plaintiffs who suffer relatively high harm. Such a remedy induces these plaintiffs to choose coupons, and plaintiffs who suffer relatively low harm to choose cash. Sorting plaintiffs in this way leads to better deterrence because the costs borne by defendants (the cash payments and the cost of providing coupons) more closely approximate the harms that they have caused.
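The sorting logic can be sketched with made-up numbers (the rule that a plaintiff's coupon valuation rises with harm — heavier users of the product value discounts more — is an illustrative assumption of mine, not the article's model):

```python
# Illustrative sketch with made-up numbers: a plaintiff takes whichever
# remedy is worth more to her. The assumption that coupon value rises
# with harm suffered is mine, not the article's model.

def chosen_remedy(harm, cash, coupon_value_per_unit_harm=0.8):
    coupon_value = coupon_value_per_unit_harm * harm
    return "coupons" if coupon_value > cash else "cash"

plaintiffs = [10, 40, 90, 150]   # harms suffered, in dollars
cash_offer = 50                  # set below high-harm coupon valuations
choices = {h: chosen_remedy(h, cash_offer) for h in plaintiffs}
# Low-harm plaintiffs (10, 40) take cash; high-harm ones (90, 150) take
# coupons, so each defendant's cost tracks the harm caused more closely.
```

With these numbers the two low-harm plaintiffs choose cash and the two high-harm plaintiffs choose coupons, which is the self-selection the article relies on.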
Remedies for Price Overcharges: The Deadweight Loss of Coupons and Discounts
This article evaluates two different remedies for consumers who have been injured by a price overcharge on the sale of a good. Under a coupon remedy, injured consumers are awarded coupons that can be used for a limited period of time to purchase the good at a price below that which prevails after the overcharge has been eliminated, that is, below the competitive price. Under a discount remedy, any consumer, without proof of injury, may purchase the good for a limited period of time at a price that is set below the competitive price. Both remedies generally cause consumers to buy an excessive amount of the good during the remedy period. Under the coupon remedy only a subset of consumers are affected in this way (those holding a relatively high number of coupons), while under the discount remedy all consumers are affected. We show nonetheless that the resulting deadweight loss could be lower under the discount remedy. We also consider how the deadweight loss changes when the length of the remedy period is increased by extending the expiration date for the use of coupons or by employing a lower discount for a longer period of time. The deadweight loss may or may not decline under the coupon remedy, though it does decline under the discount remedy. In neither case, however, does it go to zero in the limit.
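A back-of-the-envelope sketch of why spreading a discount thinly can reduce deadweight loss (the linear-demand setup, all parameters, and the linear-transfer approximation below are my assumptions, not the article's model):

```python
# Back-of-the-envelope sketch (parameters and the linear-demand setup
# are mine, not the article's model). With individual demand
# q(p) = a - p and marginal cost c, a consumer facing price c - d buys
# d extra units whose value falls short of cost: a deadweight-loss
# triangle of d**2 / 2.

def dwl_per_consumer(discount):
    return 0.5 * discount ** 2

n = 1000          # consumers in the market
f = 0.2           # fraction injured, i.e. holding coupons
d_coupon = 5.0    # price cut enjoyed by coupon holders

# Discount remedy: deliver roughly the same total transfer to everyone
# via a shallower price cut (transfer treated as ~linear in the cut).
d_discount = f * d_coupon

dwl_coupon = n * f * dwl_per_consumer(d_coupon)      # few, deep cuts
dwl_discount = n * dwl_per_consumer(d_discount)      # many, shallow cuts
# Because the loss triangle is quadratic in the cut, the across-the-board
# shallow discount can produce the smaller total deadweight loss.
```

The convexity of the loss triangle in the discount depth is what lets the discount remedy come out ahead despite distorting every consumer's purchases.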
The Welfare Implications of Costly Litigation in the Theory of Liability
One of the principal results in the economic theory of liability is that, assuming litigation is costless, the rule of strict liability with compensatory damages leads the injurer to choose the socially appropriate level of care. This paper reexamines this result when litigation is costly. It is shown that strict liability with compensatory damages generally leads to a socially inappropriate level of care and to excessive litigation costs. Social welfare can be increased by adjusting compensatory damages upward or downward, with the desired direction depending on the effect of changes in the level of liability on the injurer's decision to take care and on the victim's decision to bring suit.
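A minimal numerical sketch of the downward-adjustment case (the parameterization is hypothetical and mine, not the paper's model):

```python
# Hypothetical numbers, not the paper's model. Here care is not worth
# taking privately or socially, so compensatory damages generate suits
# (and their costs) without buying any deterrence.

HARM, SUIT_COST = 100.0, 40.0
P_ACCIDENT = 0.10   # accident probability, unaffected by liability here

def social_cost(damages):
    # A victim sues only when the award exceeds her cost of suing.
    litigation = SUIT_COST if damages > SUIT_COST else 0.0
    return P_ACCIDENT * (HARM + litigation)

# Compensatory damages (100): social cost 0.10 * (100 + 40) = 14.
# Damages lowered below the suit cost: no suits, social cost 10.
# With parameters where care responds to liability, the comparison can
# reverse, which is why the optimal adjustment can go either direction.
```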
Optimal Awards and Penalties when the Probability of Prevailing Varies Among Plaintiffs
This article derives the optimal award to a winning plaintiff and the optimal penalty on a losing plaintiff when the probability of prevailing varies among plaintiffs. Optimality is defined in terms of achieving a specified degree of deterrence of potential injurers with the lowest litigation cost. Our main result is that the optimal penalty on a losing plaintiff is positive, in contrast to common practice in the United States. By penalizing losing plaintiffs and raising the award to winning plaintiffs (relative to what it would be if losing plaintiffs were not penalized), it is possible to discourage relatively low-probability-of-prevailing plaintiffs from suing without discouraging relatively high-probability plaintiffs, and thereby to achieve the desired degree of deterrence with lower litigation costs. This result is developed first in a model in which all suits are assumed to go to trial and then in a model in which settlements are possible.
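The screening effect can be illustrated with hypothetical numbers: a risk-neutral plaintiff with probability p of prevailing sues when the expected recovery, net of the expected penalty for losing, exceeds the cost of suit.

```python
# Hypothetical numbers illustrating the screening logic (not taken
# from the article): a risk-neutral plaintiff with probability p of
# prevailing sues when expected recovery, net of the expected penalty
# for losing, exceeds the cost of suit.

def sues(p, award, penalty, cost):
    return p * award - (1 - p) * penalty > cost

# No penalty on losers: low- and high-probability plaintiffs both sue.
assert sues(0.3, award=100, penalty=0, cost=20)
assert sues(0.9, award=100, penalty=0, cost=20)

# A positive penalty paired with a higher award screens out the
# low-probability plaintiff while the high-probability one still sues.
assert not sues(0.3, award=120, penalty=60, cost=20)
assert sues(0.9, award=120, penalty=60, cost=20)
```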
Quantum Algorithms for Learning and Testing Juntas
In this article we develop quantum algorithms for learning and testing
juntas, i.e. Boolean functions which depend only on an unknown set of k out of
n input variables. Our aim is to develop efficient algorithms:
- whose sample complexity has no dependence on n, the dimension of the domain
the Boolean functions are defined over;
- with no access to any classical or quantum membership ("black-box")
queries. Instead, our algorithms use only classical examples generated
uniformly at random and fixed quantum superpositions of such classical
examples;
- which require only a few quantum examples but possibly many classical
random examples (which are considered quite "cheap" relative to quantum
examples).
Our quantum algorithms are based on a subroutine FS which enables sampling
according to the Fourier spectrum of f; the FS subroutine was used in earlier
work of Bshouty and Jackson on quantum learning. Our results are as follows:
- We give an algorithm for testing k-juntas to a given accuracy whose quantum
example complexity improves on the number of examples used by the best known
classical algorithm.
- We establish a lower bound on the number of queries required by any FS-based
k-junta testing algorithm.
- We give an algorithm for learning k-juntas to a given accuracy that uses a
small number of quantum examples together with classical random examples. We
show that this learning algorithm is close to optimal by giving a related
lower bound.
Comment: 15 pages, 1 figure. Uses synttree package. To appear in Quantum
Information Processing.
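The object under test can be illustrated classically (the paper's algorithms are quantum; this exhaustive check is only a sketch of the definition, with function and variable names of my own choosing):

```python
from itertools import product

# Classical illustration of the object being tested (the paper's
# algorithms are quantum; this brute-force check is only a sketch of
# the definition, with names of my own choosing). A k-junta on n bits
# depends on at most k of its coordinates.

def f(x):
    # A 2-junta on n = 5 bits: only x[0] and x[3] matter.
    return x[0] ^ (x[0] & x[3])

def relevant_variables(g, n):
    # Variable i is relevant if flipping it ever changes the output.
    relevant = set()
    for x in product([0, 1], repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1
            if g(x) != g(y):
                relevant.add(i)
    return relevant
```

Here `relevant_variables(f, 5)` returns `{0, 3}`, confirming f is a 2-junta; the point of the paper is to reach such conclusions without exhaustive enumeration over all 2^n inputs.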
Testing probability distributions underlying aggregated data
In this paper, we analyze and study a hybrid model for testing and learning
probability distributions. Here, in addition to samples, the testing algorithm
is provided with one of two different types of oracles to the unknown
distribution D over the domain. More precisely, we define both the dual and
cumulative dual access models, in which the algorithm can both sample from D
and, respectively, for any point i of the domain,
- query the probability mass D(i) (query access); or
- get the total cumulative mass of the points up to and including i (cumulative
access).
These two models, by generalizing the previously studied sampling and query
oracle models, allow us to bypass the strong lower bounds established for a
number of problems in these settings, while capturing several interesting
aspects of these problems -- and providing new insight on the limitations of
the models. Finally, we show that while the testing algorithms can be in most
cases strictly more efficient, some tasks remain hard even with this additional
power
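For an explicit distribution, the two oracle types can be sketched as follows (the class and method names are mine, not the paper's):

```python
import bisect
import random

# Sketch of the two oracle types for an explicit distribution over
# {1, ..., n} (class and method names are mine, not the paper's).

class DualAccess:
    def __init__(self, probs):
        # probs[i - 1] is the mass the distribution puts on point i.
        self.probs = probs
        self.cum = []
        total = 0.0
        for p in probs:
            total += p
            self.cum.append(total)

    def sample(self):
        # Standard sampling oracle, via inverse-CDF sampling.
        return bisect.bisect_left(self.cum, random.random()) + 1

    def pmf(self, i):
        # Dual access: query the probability mass of point i.
        return self.probs[i - 1]

    def cdf(self, i):
        # Cumulative dual access: total mass of points 1..i.
        return self.cum[i - 1]
```

A tester in the dual model interleaves `sample()` with `pmf(i)` queries; in the cumulative dual model it uses `cdf(i)` instead, which is what "aggregated data" (histograms, rankings) naturally provides.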
Sublinear-Time Algorithms for Monomer-Dimer Systems on Bounded Degree Graphs
For a graph , let be the partition function of the
monomer-dimer system defined by , where is the
number of matchings of size in . We consider graphs of bounded degree
and develop a sublinear-time algorithm for estimating at an
arbitrary value within additive error with high
probability. The query complexity of our algorithm does not depend on the size
of and is polynomial in , and we also provide a lower bound
quadratic in for this problem. This is the first analysis of a
sublinear-time approximation algorithm for a # P-complete problem. Our
approach is based on the correlation decay of the Gibbs distribution associated
with . We show that our algorithm approximates the probability
for a vertex to be covered by a matching, sampled according to this Gibbs
distribution, in a near-optimal sublinear time. We extend our results to
approximate the average size and the entropy of such a matching within an
additive error with high probability, where again the query complexity is
polynomial in and the lower bound is quadratic in .
Our algorithms are simple to implement and of practical use when dealing with
massive datasets. Our results extend to other systems where the correlation
decay is known to hold as for the independent set problem up to the critical
activity
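For small graphs the quantity being estimated can be computed exactly by brute force (the paper's contribution is a sublinear-time approximation; this sketch only pins down the definition Z(G, λ) = Σ_k m_k(G) λ^k, with m_k(G) the number of matchings of size k):

```python
from itertools import combinations

# Exact brute force for small graphs (the paper's algorithm is
# sublinear and approximate; this only pins down the estimated
# quantity Z(G, lam) = sum over k of m_k(G) * lam**k, where m_k(G)
# is the number of matchings of size k in G).

def partition_function(edges, lam):
    z = 0.0
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            vertices = [v for e in subset for v in e]
            if len(vertices) == len(set(vertices)):  # it is a matching
                z += lam ** k
    return z

# Path on vertices 0-1-2: the matchings are {}, {(0,1)}, {(1,2)},
# so Z = 1 + 2 * lam.
```

For the path on three vertices, `partition_function([(0, 1), (1, 2)], 2.0)` gives 5.0, matching Z = 1 + 2λ at λ = 2; the enumeration is exponential in the number of edges, which is exactly what the sublinear algorithm avoids.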