Obvious strategyproofness needs monitoring for good approximations (extended abstract)
Obvious strategyproofness (OSP) is an appealing concept as it allows incentive compatibility to be maintained even in the presence of agents that are not fully rational, e.g., those who struggle with contingent reasoning [10]. However, it has been shown to impose some limitations, e.g., no OSP mechanism can return a stable matching [3]. We here deepen the study of the limitations of OSP mechanisms by looking at their approximation guarantees for basic optimization problems paradigmatic of the area, i.e., machine scheduling and facility location. We prove a number of bounds on the approximation guarantee of OSP mechanisms, which show that OSP can come at a significant cost. However, rather surprisingly, we prove that OSP mechanisms can return optimal solutions when they use monitoring, a mechanism design paradigm that introduces a mild level of scrutiny on agents' declarations [9].
Performance evaluation of an open distributed platform for realistic traffic generation
Network researchers have dedicated a notable part of their efforts
to the area of traffic modeling and to the implementation of efficient traffic
generators. We feel that there is a strong demand for traffic generators
capable of reproducing realistic traffic patterns according to theoretical
models while at the same time achieving high performance. This work presents an open
distributed platform for traffic generation that we called the distributed
internet traffic generator (D-ITG), capable of producing traffic (network,
transport and application layer) at packet level and of accurately replicating
appropriate stochastic processes for both the inter-departure time (IDT) and
packet size (PS) random variables. We implemented two different versions of
our distributed generator. In the first one, a log server is in charge of
recording the information transmitted by senders and receivers, and these
communications are based either on TCP or UDP. In the other one, senders and
receivers make use of the MPI library. This work presents a complete performance
comparison of the centralized version and the two distributed versions of
D-ITG.
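The core of such a generator is drawing the IDT and PS random variables from configurable distributions. A minimal sketch of that idea, assuming an exponential IDT and a lognormal PS (illustrative choices, not D-ITG's actual defaults or implementation):

```python
import random

def generate_packets(n, idt_rate=100.0, ps_mu=6.0, ps_sigma=0.6):
    """Return n (timestamp, size) pairs for a synthetic packet stream.
    IDT ~ Exponential(idt_rate), PS ~ Lognormal(ps_mu, ps_sigma) clamped
    to Ethernet frame limits. Distributions and parameters are
    illustrative assumptions, not D-ITG's defaults."""
    t, packets = 0.0, []
    for _ in range(n):
        t += random.expovariate(idt_rate)                  # inter-departure time (s)
        size = int(random.lognormvariate(ps_mu, ps_sigma))  # packet size draw (bytes)
        packets.append((t, min(max(size, 40), 1500)))       # clamp to 40..1500 bytes
    return packets
```

Swapping in other stochastic processes (e.g., Pareto IDT for bursty traffic) only changes the two draw lines.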
The Power of Verification for Greedy Mechanism Design
Greedy algorithms are known to provide near optimal approximation guarantees for Combinatorial Auctions (CAs) with multidimensional bidders, ignoring incentive compatibility. Borodin and Lucier [5], however, proved that truthful greedy-like mechanisms for CAs with multi-minded bidders do not achieve good approximation guarantees. In this work, we seek a deeper understanding of greedy mechanism design and investigate under which general assumptions we can have efficient and truthful greedy mechanisms for CAs. Towards this goal, we use the framework of priority algorithms and weak and strong verification, where the bidders are not allowed to overbid on their winning set or on any subset of this set, respectively. We provide a complete characterization of the power of weak verification, showing that it is sufficient and necessary for any greedy fixed priority algorithm to become truthful with the use of money or not, depending on the ordering of the bids. Moreover, we show that strong verification is sufficient and necessary for the greedy algorithm of [20], which is 2-approximate for submodular CAs, to become truthful with money in finite bidding domains. Our proof is based on an interesting structural analysis of the strongly connected components of the declaration graph.
Obvious strategyproofness needs monitoring for good approximations
Obvious strategyproofness (OSP) is an appealing concept as it allows incentive compatibility to be maintained even in the presence of agents that are not fully rational, e.g., those who struggle with contingent reasoning (Li 2015). However, it has been shown to impose some limitations, e.g., no OSP mechanism can return a stable matching (Ashlagi and Gonczarowski 2015). We here deepen the study of the limitations of OSP mechanisms by looking at their approximation guarantees for basic optimization problems paradigmatic of the area, i.e., machine scheduling and facility location. We prove a number of bounds on the approximation guarantee of OSP mechanisms, which show that OSP can come at a significant cost. However, rather surprisingly, we prove that OSP mechanisms can return optimal solutions when they use monitoring, a novel mechanism design paradigm that introduces a mild level of scrutiny on agents' declarations (Kovács, Meyer, and Ventre 2015).
Social Pressure in Opinion Games
Motivated by privacy and security concerns in online social networks, we study the role of social pressure in opinion games. These are games, important in economics and sociology, that model the formation of opinions in a social network. We enrich the definition of (noisy) best-response dynamics for opinion games by introducing the pressure, increasing with time, to reach an agreement. We prove that for clique social networks, the dynamics always converges to consensus (no matter the level of noise) if the social pressure is high enough. Moreover, we provide (tight) bounds on the speed of convergence; these bounds are polynomial in the number of players provided that the pressure grows sufficiently fast. We finally look beyond cliques: we characterize the graphs for which consensus is guaranteed, and make some considerations on the computational complexity of checking whether a graph satisfies such a condition
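The dynamics described above can be sketched as a logit best-response process on a clique in which the inverse noise grows over time, so agents increasingly prefer to side with the majority. The model below is an illustrative simulation under assumed parameters (binary opinions, multiplicative pressure growth), not the authors' exact formulation:

```python
import math
import random

def simulate_clique(n=20, steps=2000, beta0=0.1, growth=1.01, seed=0):
    """Noisy best-response (logit) dynamics on a clique with increasing
    social pressure: the inverse noise beta_t grows each step, so agreeing
    with the majority becomes ever more attractive. Illustrative sketch."""
    rng = random.Random(seed)
    opinions = [rng.choice([0, 1]) for _ in range(n)]
    beta = beta0
    for _ in range(steps):
        i = rng.randrange(n)                  # pick a random agent to update
        agree1 = sum(opinions) - opinions[i]  # neighbours holding opinion 1
        agree0 = (n - 1) - agree1             # neighbours holding opinion 0
        # logit choice: P(opinion 1) grows with beta * (majority margin);
        # clamp the exponent to avoid overflow as pressure explodes
        d = max(-700.0, min(700.0, beta * (agree1 - agree0)))
        opinions[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-d)) else 0
        beta *= growth                        # social pressure increases with time
    return opinions
```

With fast-growing pressure the process typically freezes into consensus, matching the clique result in the abstract; with constant low pressure it keeps fluctuating.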
Making Sigma-Protocols Non-interactive Without Random Oracles
Damgård, Fazio and Nicolosi (TCC 2006) gave a transformation of Sigma-protocols, 3-move honest-verifier zero-knowledge proofs, into efficient non-interactive zero-knowledge arguments for a designated verifier. Their transformation uses additively homomorphic encryption
to encrypt the verifier’s challenge, which the prover uses to compute an encrypted answer. The transformation does not rely on the random oracle model but proving soundness requires a complexity leveraging assumption.
We propose an alternative instantiation of their transformation and show that it achieves culpable soundness without complexity leveraging. This
improves upon an earlier result by Ventre and Visconti (Africacrypt 2009), who used a different construction which achieved weak culpable soundness.
We demonstrate how our construction can be used to prove validity of encrypted votes in a referendum. This yields a voting system with homomorphic tallying that does not rely on the Fiat-Shamir heuristic
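For readers unfamiliar with the 3-move (commitment, challenge, response) shape the transformation starts from, here is Schnorr's classic Sigma-protocol for knowledge of a discrete logarithm. This is a standard textbook example, not the paper's construction; in the transformation above, the verifier's challenge `e` would be sent encrypted under an additively homomorphic scheme rather than in the clear:

```python
import random

# Illustrative group: Z_p* for a Mersenne prime; fine for exposition,
# NOT parameters anyone should use for real security.
p = 2**127 - 1
g = 3
x = random.randrange(1, p - 1)   # prover's secret witness
h = pow(g, x, p)                 # public statement: h = g^x

# Move 1 (prover -> verifier): commitment
r = random.randrange(1, p - 1)
a = pow(g, r, p)
# Move 2 (verifier -> prover): random challenge
e = random.randrange(0, 2**64)
# Move 3 (prover -> verifier): response, exponents reduced mod the group order
z = (r + e * x) % (p - 1)
# Verification: g^z == a * h^e  (since g^(r+ex) = g^r * (g^x)^e)
assert pow(g, z, p) == (a * pow(h, e, p)) % p
```

Making this non-interactive without random oracles amounts to removing the verifier's live participation in Move 2, which is exactly what the encrypted-challenge transformation achieves.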
Clearing Financial Networks with Derivatives: From Intractability to Algorithms
Financial networks raise a significant computational challenge in identifying
insolvent firms and evaluating their exposure to systemic risk. This task,
known as the clearing problem, is computationally tractable when dealing with
simple debt contracts. However, in the presence of certain derivatives called
credit default swaps (CDSes), the clearing problem is FIXP-complete.
Existing techniques only show PPAD-hardness for finding an
ε-solution for the clearing problem with CDSes within an unspecified
small range for ε.
We present significant progress in both facets of the clearing problem: (i)
intractability of approximate solutions; (ii) algorithms and heuristics for
computable solutions. Leveraging Pure-Circuit (FOCS'22), we provide
the first explicit inapproximability bound for the clearing problem involving
CDSes. Our primary contribution is a reduction from Pure-Circuit
which establishes that finding approximate solutions is PPAD-hard
within a range of roughly 5%.
To alleviate the complexity of the clearing problem, we identify two
meaningful restrictions of the class of financial networks motivated by
regulations: (i) the presence of a central clearing authority; and (ii) the
restriction to covered CDSes. We provide the following results: (i) the
PPAD-hardness of approximation persists when central clearing
authorities are introduced; (ii) an optimisation-based method for solving the
clearing problem with central clearing authorities; (iii) a polynomial-time
algorithm when the two restrictions hold simultaneously.
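The tractable base case mentioned above, clearing with simple debt contracts only, can be solved by the classic Eisenberg-Noe fictitious-default iteration: start from full payments and repeatedly shrink each bank's payment to what its assets can cover. A minimal sketch (plain debt only; CDSes, the source of the hardness results, are deliberately not modelled):

```python
def clearing_vector(liabilities, external_assets, rounds=100):
    """Eisenberg-Noe style fictitious-default iteration.
    liabilities[i][j] = nominal debt of bank i to bank j;
    external_assets[i] = bank i's outside assets.
    Returns each bank's clearing payment. Illustrative sketch for the
    tractable debt-only case."""
    n = len(external_assets)
    total_debt = [sum(liabilities[i]) for i in range(n)]
    pay = total_debt[:]                           # start from full payment
    for _ in range(rounds):
        new_pay = []
        for i in range(n):
            # assets available to i: outside assets plus pro-rata payments received
            recv = sum(pay[j] * (liabilities[j][i] / total_debt[j])
                       for j in range(n) if total_debt[j] > 0)
            new_pay.append(min(total_debt[i], external_assets[i] + recv))
        if new_pay == pay:                        # reached a fixed point
            break
        pay = new_pay
    return pay
```

Starting from full payment makes the iteration converge monotonically downward to the greatest clearing vector; it is the non-monotone fixed points induced by CDSes that break this scheme and drive the FIXP/PPAD results.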
Bronchiolitis: an update on management and prophylaxis.
Bronchiolitis is an acute respiratory illness that is the leading cause of hospitalization in children under 2 years of age in the UK. Respiratory syncytial virus is the most common virus associated with bronchiolitis and has the highest disease severity, mortality and cost. Bronchiolitis is generally a self-limiting condition, but can have serious consequences in infants who are very young, premature, or have underlying comorbidities. Management of bronchiolitis in the UK is guided by the National Institute for Health and Care Excellence (2015) guidance. The mainstays of management are largely supportive, consisting of fluid management and respiratory support. Pharmacological interventions including nebulized bronchodilators, steroids and antibiotics generally have limited or no evidence of efficacy and are not advised by the National Institute for Health and Care Excellence. Antiviral therapeutics remain in development. As treatments are limited, there have been extensive efforts to develop vaccines, mainly targeting respiratory syncytial virus. At present, the only licensed product is a monoclonal antibody for passive immunisation. Its cost restricts its use to those at highest risk. Vaccines for active immunisation of pregnant women and young infants are also being developed.
A practical and catalyst-free trifluoroethylation reaction of amines using trifluoroacetic acid
Amines are a fundamentally important class of biologically active compounds and the ability to manipulate their physicochemical properties through the introduction of fluorine is of paramount importance in medicinal chemistry. Current synthesis methods for the construction of fluorinated amines rely on air- and moisture-sensitive reagents that require special handling or harsh reductants that limit functionality. Here we report practical, catalyst-free, reductive trifluoroethylation reactions of free amines exhibiting remarkable functional group tolerance. The reactions proceed in conventional glassware without rigorous exclusion of either moisture or oxygen, and use trifluoroacetic acid as a stable and inexpensive fluorine source. The new methods provide access to a wide range of medicinally relevant functionalized tertiary beta-fluoroalkylamine cores, either through direct trifluoroethylation of secondary amines or via a three-component coupling of primary amines, aldehydes and trifluoroacetic acid. A reduction of in situ-generated silyl ester species is proposed to account for the reductive selectivity observed.
The Power of Verification for Greedy Mechanism Design
Greedy algorithms are known to provide, in polynomial time, near optimal approximation guarantees for Combinatorial Auctions (CAs) with multidimensional bidders. It is known that truthful greedy-like mechanisms for CAs with multi-minded bidders do not achieve good approximation guarantees.
In this work, we seek a deeper understanding of greedy mechanism design and investigate under which general assumptions we can have efficient and truthful greedy mechanisms for CAs. Towards this goal, we use the framework of priority algorithms and weak and strong verification, where the bidders are not allowed to overbid on their winning set or on any subset of this set, respectively. We provide a complete characterization of the power of weak verification, showing that it is sufficient and necessary for any greedy fixed priority algorithm to become truthful with the use of money or not, depending on the ordering of the bids. Moreover, we show that strong verification is sufficient and necessary to obtain a 2-approximate truthful mechanism with money, based on a known greedy algorithm, for the problem of submodular CAs in finite bidding domains. Our proof is based on an interesting structural analysis of the strongly connected components of the declaration graph.
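The known 2-approximate greedy algorithm for submodular CAs referred to above assigns items one at a time, each to the bidder with the largest marginal value given her current bundle. A minimal sketch of that allocation rule (the `marginal` valuation oracle is an assumed interface, and incentive issues are ignored here, which is precisely what verification is needed to repair):

```python
def greedy_submodular_allocation(items, bidders, marginal):
    """Item-by-item greedy for combinatorial auctions with submodular
    bidders: each item goes to the bidder with the largest marginal value
    for it given her current bundle. For submodular valuations this
    greedy achieves a 2-approximation to the optimal social welfare.
    `marginal(b, bundle, item)` is an assumed valuation oracle."""
    bundles = {b: set() for b in bidders}
    for item in items:
        # award the item to the bidder whose bundle it improves the most
        best = max(bidders, key=lambda b: marginal(b, bundles[b], item))
        bundles[best].add(item)
    return bundles
```

A declaration-misreporting bidder can inflate her marginals to win items she does not value, which is why overbidding restrictions (weak and strong verification) are the natural lever studied in the abstract.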
