1,745 research outputs found
Modeling self-organization of communication and topology in social networks
This paper introduces a model of self-organization between communication and
topology in social networks, with a feedback between different communication
habits and the topology. To study this feedback, we let agents communicate to
build a perception of a network and use this information to create strategic
links. We observe a narrow distribution of links when the communication is low
and a system with a broad distribution of links when the communication is high.
We also analyze the outcome of chatting, cheating, and lying, as strategies to
get better access to information in the network. Chatting, although only
adopted by a few agents, gives a global gain in the system. In contrast, a
global loss is inevitable in a system with too many liars.
Comment: 6 pages, 7 figures; Java simulation available at
http://cmol.nbi.dk/models/inforew/inforew.htm
Analysis of a continuous-time model of structural balance
It is not uncommon for certain social networks to divide into two opposing
camps in response to stress. This happens, for example, in networks of
political parties during winner-takes-all elections, in networks of companies
competing to establish technical standards, and in networks of nations faced
with mounting threats of war. A simple model for these two-sided separations is
the dynamical system dX/dt = X^2 where X is a matrix of the friendliness or
unfriendliness between pairs of nodes in the network. Previous simulations
suggested that only two types of behavior were possible for this system: either
all relationships become friendly, or two hostile factions emerge. Here we
prove that for generic initial conditions, these are indeed the only possible
outcomes. Our analysis yields a closed-form expression for faction membership
as a function of the initial conditions, and implies that the initial amount of
friendliness in large social networks (started from random initial conditions)
determines whether they will end up in intractable conflict or global harmony.
Comment: 12 pages, 2 figures
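The dX/dt = X^2 dynamics described above can be sketched numerically. This is a minimal forward-Euler simulation, not the paper's analysis; the network size, time step, and random initial conditions are illustrative. The sign pattern of X at blow-up encodes which camp each pair of nodes ends up in.

```python
import numpy as np

def simulate_balance(X0, dt=1e-3, steps=4000):
    """Integrate dX/dt = X^2 (matrix square) with forward Euler.

    X0: symmetric matrix of initial friendliness values.
    Returns the entrywise sign of X once entries blow up, which
    encodes the final faction structure (+1 friendly, -1 hostile).
    """
    X = X0.copy()
    for _ in range(steps):
        X = X + dt * (X @ X)
        if np.abs(X).max() > 1e6:   # finite-time blow-up: factions formed
            break
    return np.sign(X)

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
X0 = (A + A.T) / 2                  # random symmetric initial conditions
S = simulate_balance(X0)
```

Because the dynamics preserve the eigenvectors of a symmetric X, the fastest-growing eigenvalue dominates at blow-up, which is why the sign pattern splits the nodes into (at most) two factions.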
Truthful Mechanisms with Implicit Payment Computation
It is widely believed that computing payments needed to induce truthful
bidding is somehow harder than simply computing the allocation. We show that
the opposite is true: creating a randomized truthful mechanism is essentially
as easy as a single call to a monotone allocation rule. Our main result is a
general procedure to take a monotone allocation rule for a single-parameter
domain and transform it (via a black-box reduction) into a randomized mechanism
that is truthful in expectation and individually rational for every
realization. The mechanism implements the same outcome as the original
allocation rule with probability arbitrarily close to 1, and requires
evaluating that allocation rule only once. We also provide an extension of this
result to multi-parameter domains and cycle-monotone allocation rules, under
mild star-convexity and non-negativity hypotheses on the type space and
allocation rule, respectively.
Because our reduction is simple, versatile, and general, it has many
applications to mechanism design problems in which re-evaluating the allocation
rule is either burdensome or informationally impossible. Applying our result to
the multi-armed bandit problem, we obtain truthful randomized mechanisms whose
regret matches the information-theoretic lower bound up to logarithmic factors,
even though prior work showed this is impossible for truthful deterministic
mechanisms. We also present applications to offline mechanism design, showing
that randomization can circumvent a communication complexity lower bound for
deterministic payments computation, and that it can also be used to create
truthful shortest path auctions that approximate the welfare of the VCG
allocation arbitrarily well, while having the same running time complexity as
Dijkstra's algorithm.
Comment: This is a full version of the conference paper from ACM EC 2010,
merged with a multi-parameter extension (Section 8) from the follow-up paper
in ACM EC 2013 by the same authors. Apart from the revised presentation, this
version is updated to reflect the follow-up work and the current status of
open questions. The current version (v5) contains several minor bug fixes in
the proof of Lemma 7.10. J. of the ACM (JACM), Volume 62, Issue 2, May 201
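The core idea of computing payments implicitly through randomization can be sketched for a single-parameter domain. Caveat: the paper's reduction needs only a single call to the allocation rule and couples the resampling with the allocation itself; the toy below uses one extra call, and the threshold allocation rule is hypothetical. It relies on the fact that the Myerson payment p(b) = b*A(b) - integral of A(z) over [0, b] admits an unbiased one-sample estimator b*A(b) - b*A(u*b) with u uniform on [0, 1].

```python
import numpy as np

def allocation(b, theta=0.5):
    """A toy monotone allocation rule: allocate fully at or above a threshold."""
    return 1.0 if b >= theta else 0.0

def run_mechanism(bid, rng):
    """Charge a randomized payment whose expectation equals the Myerson
    payment p(b) = b*A(b) - integral_0^b A(z) dz, using one extra call to
    the allocation rule instead of numerical integration."""
    x = allocation(bid)
    u = rng.random()                         # u ~ Uniform(0, 1)
    payment = bid * x - bid * allocation(u * bid)
    return x, payment

rng = np.random.default_rng(1)
bid = 0.8
pays = [run_mechanism(bid, rng)[1] for _ in range(100_000)]
avg = float(np.mean(pays))
# For the threshold rule at bid 0.8, the Myerson payment is the threshold 0.5,
# so the average randomized payment should concentrate near 0.5.
```

Since the payment never exceeds bid * x, the mechanism is individually rational for every realization, matching the property claimed in the abstract.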
Prophet Inequalities with Limited Information
In the classical prophet inequality, a gambler observes a sequence of
stochastic rewards and must decide, as each reward arrives, whether to keep
it and stop the game, or to forfeit it forever and reveal the next value.
The gambler's goal is to obtain a constant fraction of the expected reward
that the optimal offline algorithm would get. Recently, prophet inequalities
have been generalized to settings where the gambler can choose multiple
items and, more generally, any independent set in a matroid. However, all
the existing algorithms require the gambler to know the distribution from
which the rewards are drawn.
The assumption that the gambler knows the distribution from which the
rewards are drawn is very strong. Instead, we work with the much simpler
assumption that the gambler only knows a few samples from this distribution.
We construct the first single-sample prophet inequalities for many settings
of interest, whose guarantees all asymptotically match the best possible
\emph{even with full knowledge of the distribution}. Specifically, we
provide a novel single-sample algorithm for the setting where the gambler
can choose multiple elements, whose analysis is based on random walks with
limited correlation. In addition,
we provide a black-box method for converting specific types of solutions to the
related \emph{secretary problem} to single-sample prophet inequalities, and
apply it to several existing algorithms. Finally, we provide a constant-sample
prophet inequality for constant-degree bipartite matchings.
We apply these results to design the first posted-price and multi-dimensional
auction mechanisms with limited information in settings with asymmetric
bidders.
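A simple threshold rule from this literature illustrates the single-sample setting (this is an illustration, not the paper's algorithms): use the largest of the samples as a stopping threshold and accept the first reward that beats it. The toy instance below, with Uniform(0, 1) rewards, is an assumption for demonstration; the simulated ratio to the prophet's offline reward lands above the worst-case factor of 1/2.

```python
import numpy as np

def single_sample_prophet(values, samples):
    """Set the threshold to the max of one sample per distribution and
    accept the first reward that beats it (0 if none does)."""
    t = samples.max()
    for v in values:
        if v > t:
            return v
    return 0.0

rng = np.random.default_rng(2)
n, runs = 5, 100_000
alg, opt = 0.0, 0.0
for _ in range(runs):
    values = rng.random(n)     # the gambler's rewards, Uniform(0, 1)
    samples = rng.random(n)    # one past sample per distribution
    alg += single_sample_prophet(values, samples)
    opt += values.max()        # the prophet's (offline) reward
ratio = alg / opt
```

The algorithm never looks at the reward distributions themselves, only at one sample apiece, which is exactly the informational regime the abstract studies.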
Simple and Near-Optimal Mechanisms For Market Intermediation
A prevalent market structure in the Internet economy consists of buyers and
sellers connected by a platform (such as Amazon or eBay) that acts as an
intermediary and keeps a share of the revenue of each transaction. While the
optimal mechanism that maximizes the intermediary's profit in such a setting
may be quite complicated, the mechanisms observed in reality are generally much
simpler, e.g., applying an affine function to the price of the transaction as
the intermediary's fee. Loertscher and Niedermayer [2007] initiated the study
of such fee-setting mechanisms in two-sided markets, and we continue this
investigation by addressing the question of when an affine fee schedule is
approximately optimal for the worst-case seller distribution. On the one hand, our work
supplies non-trivial sufficient conditions on the buyer side (i.e. linearity of
marginal revenue function, or MHR property of value and value minus cost
distributions) under which an affine fee schedule can obtain a constant
fraction of the intermediary's optimal profit for all seller distributions. On
the other hand, we complement our result by showing that proper affine
fee-setting mechanisms (e.g. those used in eBay and Amazon selling plans) are
unable to extract a constant fraction of optimal profit in the worst-case
seller distribution. As subsidiary results we also show there exists a constant
gap between maximum surplus and maximum revenue under the aforementioned
conditions. Most of the mechanisms that we propose are also prior-independent
with respect to the seller, which signifies the practical implications of our
result.
Comment: To appear in WINE'14, the 10th Conference on Web and Internet
Economics
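An affine fee schedule can be made concrete in a toy bilateral-trade instance. Everything below is an illustrative assumption, not the paper's model: the buyer's value is Uniform(0, 1), the seller has a known cost and posts a profit-maximizing price p given that the platform keeps a fee a*p + b, and the platform grid-searches the affine coefficients.

```python
import numpy as np

def seller_price(a, b, c):
    """Price a profit-maximizing seller with cost c posts when the platform
    keeps an affine fee a*p + b and the buyer's value is Uniform(0, 1).
    The seller maximizes ((1-a)*p - b - c) * (1 - p); the optimum is the
    midpoint of the quadratic's roots, clipped to [0, 1]."""
    p = 0.5 + (b + c) / (2 * (1 - a))
    return min(max(p, 0.0), 1.0)

def platform_profit(a, b, c=0.1):
    p = seller_price(a, b, c)
    sale_prob = max(1.0 - p, 0.0)        # P(v >= p) for v ~ Uniform(0, 1)
    return (a * p + b) * sale_prob

# Grid-search the best affine fee schedule for this toy instance.
grid = np.linspace(0.0, 0.9, 91)
best = max(((platform_profit(a, b), a, b) for a in grid for b in grid),
           key=lambda t: t[0])
```

Even this crude sketch shows the tradeoff the abstract studies: raising the fee pushes the posted price up and the sale probability down, so the intermediary's profit is maximized at moderate affine coefficients.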
Degree Distribution of Competition-Induced Preferential Attachment Graphs
We introduce a family of one-dimensional geometric growth models, constructed
iteratively by locally optimizing the tradeoffs between two competing metrics,
and show that this family is equivalent to a family of preferential attachment
random graph models with upper cutoffs. This is the first explanation of how
preferential attachment can arise from a more basic underlying mechanism of
local competition. We rigorously determine the degree distribution for the
family of random graph models, showing that it obeys a power law up to a finite
threshold and decays exponentially above this threshold.
We also rigorously analyze a generalized version of our graph process, with
two natural parameters, one corresponding to the cutoff and the other a
``fertility'' parameter. We prove that the general model has a power-law degree
distribution up to a cutoff, and establish monotonicity of the power as a
function of the two parameters. Limiting cases of the general model include the
standard preferential attachment model without cutoff and the uniform
attachment model.
Comment: 24 pages, one figure. To appear in Combinatorics, Probability and
Computing. Note, this is a long version, with complete proofs, of the paper
"Competition-Induced Preferential Attachment" (cond-mat/0402268)
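Preferential attachment with an upper cutoff can be simulated directly: a new node attaches to an existing node with probability proportional to min(degree, cutoff), so high-degree nodes stop gaining an advantage beyond the cutoff. This is a minimal sketch; the graph size and cutoff value are illustrative, not taken from the paper.

```python
import numpy as np

def pa_with_cutoff(n, cutoff, rng):
    """Grow a tree by preferential attachment in which a node's
    attractiveness is its degree capped at `cutoff`. Returns the
    degree sequence."""
    deg = np.zeros(n, dtype=int)
    deg[0] = deg[1] = 1              # start from a single edge
    for new in range(2, n):
        w = np.minimum(deg[:new], cutoff).astype(float)
        target = rng.choice(new, p=w / w.sum())
        deg[new] += 1
        deg[target] += 1
    return deg

rng = np.random.default_rng(3)
deg = pa_with_cutoff(5000, cutoff=20, rng=rng)
```

Plotting the histogram of `deg` on log-log axes would show the behavior the abstract proves: a power-law regime below the cutoff and an exponentially decaying tail above it.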
Social Ranking Techniques for the Web
The proliferation of social media has the potential for changing the
structure and organization of the web. In the past, scientists have looked at
the web as a large connected component to understand how the topology of
hyperlinks correlates with the quality of information contained in the page and
they proposed techniques to rank information contained in web pages. We argue
that information from web pages and network data on social relationships can be
combined to create a personalized and socially connected web. In this paper, we
look at the web as a composition of two networks, one consisting of information
in web pages and the other of personal data shared on social media web sites.
Together, they allow us to analyze how social media tunnels the flow of
information from person to person and how to use the structure of the social
network to rank, deliver, and organize information specifically for each
individual user. We validate our social ranking concepts through a ranking
experiment conducted on web pages that users shared on Google Buzz and Twitter.
Comment: 7 pages, ASONAM 201
A fitness model for the Italian Interbank Money Market
We use the theory of complex networks in order to quantitatively characterize
the formation of communities in a particular financial market. The system is
composed of different banks that exchange liquidity loans and debts on a
daily basis. Through topological analysis, and by means of a model of
network growth, we can detect the formation of different groups of banks
characterized by different business strategies. The model, based on Pareto's
law, makes no use of growth or preferential attachment, and it correctly
reproduces all the various statistical properties of the system. We believe
that this network model of the market could be an efficient way to evaluate
the impact of different policies in the liquidity market.
Comment: 5 pages, 5 figures
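A fitness (hidden-variable) model of this kind can be sketched as follows. The linking rule p_ij = x_i*x_j / (x_i*x_j + N) is a common choice in the fitness-model literature and an assumption here, not necessarily the paper's exact function; the Pareto exponent is likewise illustrative.

```python
import numpy as np

def fitness_network(n, alpha=2.5, rng=None):
    """Fitness random graph: bank i draws a Pareto-distributed size x_i,
    and banks i, j are linked with probability x_i*x_j / (x_i*x_j + n),
    with no growth and no preferential attachment."""
    if rng is None:
        rng = np.random.default_rng(4)
    x = rng.pareto(alpha, n) + 1.0       # Pareto's law for bank "size"
    px = np.outer(x, x)
    p = px / (px + n)
    u = rng.random((n, n))
    adj = np.triu(u < p, k=1)            # sample each pair once
    adj = adj | adj.T                    # symmetrize: undirected links
    return x, adj

x, adj = fitness_network(1000)
deg = adj.sum(axis=1)
```

The key qualitative feature survives even in this sketch: large banks (high fitness) end up with many counterparties, so communities of hubs and peripheral banks emerge without any growth dynamics.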
Kronecker Graphs: An Approach to Modeling Networks
How can we model networks with a mathematically tractable model that allows
for rigorous analysis of network properties? Networks exhibit a long list of
surprising properties: heavy tails for the degree distribution; small
diameters; and densification and shrinking diameters over time. Most present
network models either fail to match several of the above properties, are
complicated to analyze mathematically, or both. In this paper we propose a
generative model for networks that is both mathematically tractable and can
generate networks that have the above mentioned properties. Our main idea is to
use the Kronecker product to generate graphs that we refer to as "Kronecker
graphs".
First, we prove that Kronecker graphs naturally obey common network
properties. We also provide empirical evidence showing that Kronecker graphs
can effectively model the structure of real networks.
We then present KronFit, a fast and scalable algorithm for fitting the
Kronecker graph generation model to large real networks. A naive approach to
fitting would take super-exponential time. In contrast, KronFit takes linear
time, by exploiting the structure of Kronecker matrix multiplication and by
using statistical simulation techniques.
Experiments on large real and synthetic networks show that KronFit finds
accurate parameters that closely mimic the properties of target
networks. Once fitted, the model parameters can be used to gain insights about
the network structure, and the resulting synthetic graphs can be used for null
models, anonymization, extrapolations, and graph summarization.
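The generation step described above is compact enough to sketch directly: take the k-th Kronecker power of a small initiator probability matrix and flip an independent coin per potential edge. The 2x2 initiator values below are illustrative, not the paper's fitted parameters.

```python
import numpy as np

def kronecker_graph(theta, k, rng):
    """Sample a stochastic Kronecker graph: form the k-th Kronecker power
    of the initiator probability matrix `theta`, then include each
    (directed, possibly self-loop) edge independently with that
    probability."""
    P = theta.copy()
    for _ in range(k - 1):
        P = np.kron(P, theta)
    return rng.random(P.shape) < P

theta = np.array([[0.9, 0.5],
                  [0.5, 0.2]])                 # illustrative 2x2 initiator
rng = np.random.default_rng(5)
A = kronecker_graph(theta, k=8, rng=rng)       # 2^8 = 256-node graph
```

The expected number of edges is (sum of initiator entries)^k, which is what makes properties like densification analytically tractable for this model.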
