Comparison-Based Choices
A broad range of online behaviors are mediated by interfaces in which people
make choices among sets of options. A rich and growing line of work in the
behavioral sciences indicates that human choices follow not only from the
utility of alternatives, but also from the choice set in which alternatives are
presented. In this work we study comparison-based choice functions, a simple
but surprisingly rich class of functions capable of exhibiting so-called
choice-set effects. Motivated by the challenge of predicting complex choices,
we study the query complexity of these functions in a variety of settings. We
consider settings that allow for active queries or passive observation of a
stream of queries, and give analyses both at the granularity of individuals and
populations that might exhibit heterogeneous choice behavior. Our main result
is that any comparison-based choice function in one dimension can be inferred
as efficiently as a basic maximum or minimum choice function across many query
contexts, suggesting that choice-set effects need not entail any fundamental
algorithmic barriers to inference. We also introduce a class of choice
functions we call distance-comparison-based functions, and briefly discuss the
analysis of such functions. The framework we outline provides intriguing
connections between human choice behavior and a range of questions in the
theory of sorting.
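To make the connection to sorting concrete, here is a minimal illustrative sketch (not the paper's algorithm): for the basic maximum choice function, each active query on a two-element choice set reveals one pairwise comparison, so the hidden order can be recovered with O(n log n) queries by sorting against the choice oracle. The utilities and oracle below are hypothetical stand-ins.

    # Sketch: infer the underlying order of items from a black-box choice
    # oracle via active two-element queries (the connection to sorting).
    import functools

    def infer_order(items, choose):
        """Recover the utility order using choose(option_set) -> chosen item."""
        def cmp(a, b):
            # One active query on the two-element choice set {a, b}.
            return -1 if choose({a, b}) == b else 1
        return sorted(items, key=functools.cmp_to_key(cmp))

    # Hypothetical oracle: a maximum choice function with hidden utilities.
    utility = {"x": 0.2, "y": 0.9, "z": 0.5}
    oracle = lambda s: max(s, key=utility.get)
    print(infer_order(["x", "y", "z"], oracle))  # ['x', 'z', 'y']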
Graph cluster randomization: network exposure to multiple universes
A/B testing is a standard approach for evaluating the effect of online
experiments; the goal is to estimate the 'average treatment effect' of a new
feature or condition by exposing a sample of the overall population to it. A
drawback with A/B testing is that it is poorly suited for experiments involving
social interference, when the treatment of individuals spills over to
neighboring individuals along an underlying social network. In this work, we
propose a novel methodology using graph clustering to analyze average treatment
effects under social interference. To begin, we characterize graph-theoretic
conditions under which individuals can be considered to be 'network exposed' to
an experiment. We then show how graph cluster randomization admits an efficient
exact algorithm to compute the probabilities for each vertex being network
exposed under several of these exposure conditions. Using these probabilities
as inverse weights, a Horvitz-Thompson estimator can then provide an effect
estimate that is unbiased, provided that the exposure model has been properly
specified.
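As a minimal sketch of the estimation step (assuming the per-vertex exposure probabilities have already been computed, e.g. by the exact algorithm described above), the Horvitz-Thompson estimator inverse-weights each network-exposed vertex by its exposure probability:

    # Sketch: Horvitz-Thompson estimate of the average treatment effect
    # from exposure indicators and their (known) exposure probabilities.
    def horvitz_thompson_ate(y, exp_treat, exp_ctrl, p_treat, p_ctrl):
        """y[i]: outcome of vertex i; exp_*[i]: network-exposure indicators;
        p_*[i]: probabilities of those exposure events under the design."""
        n = len(y)
        y_t = sum(y[i] / p_treat[i] for i in range(n) if exp_treat[i])
        y_c = sum(y[i] / p_ctrl[i] for i in range(n) if exp_ctrl[i])
        return (y_t - y_c) / n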
Given an estimator that is unbiased, we focus on minimizing the variance.
First, we develop simple sufficient conditions for the variance of the
estimator to be asymptotically small in n, the size of the graph. However, for
general randomization schemes, this variance can be lower bounded by an
exponential function of the degrees of a graph. In contrast, we show that if a
graph satisfies a restricted-growth condition on the growth rate of
neighborhoods, then there exists a natural clustering algorithm, based on
vertex neighborhoods, for which the variance of the estimator can be upper
bounded by a linear function of the degrees. Thus we show that proper cluster
randomization can lead to exponentially lower estimator variance when
experimentally measuring average treatment effects under interference.
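The neighborhood-based clustering itself can be sketched as follows (an illustrative simplification, not the exact construction analyzed in the paper): grow each cluster from the 1-hop ball around an uncovered seed vertex, then randomize treatment at the cluster level.

    # Sketch: greedy 1-hop neighborhood clustering plus cluster-level
    # randomization. Illustrative only; not the paper's exact construction.
    import random

    def neighborhood_clusters(adj):
        """adj: vertex -> set of neighbors. Returns vertex -> cluster id."""
        cluster, cid = {}, 0
        order = list(adj)
        random.shuffle(order)
        for v in order:
            if v in cluster:
                continue
            for u in {v} | adj[v]:          # seed v plus uncovered neighbors
                cluster.setdefault(u, cid)
            cid += 1
        return cluster

    def cluster_randomize(cluster, p=0.5):
        coins = {c: random.random() < p for c in set(cluster.values())}
        return {v: coins[c] for v, c in cluster.items()}  # vertex -> treated?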
Design and Analysis of Experiments in Networks: Reducing Bias from Interference
Estimating the effects of interventions in networks is complicated by interference, such that the outcomes for one experimental unit may depend on the treatment assignments of other units. Familiar statistical formalisms, experimental designs, and analysis methods assume the absence of this interference, and result in biased estimates of causal effects when it exists. While some assumptions can lead to unbiased estimates, these assumptions are generally unrealistic in the context of a network and often amount to assuming away the interference. In this work, we evaluate methods for designing and analyzing randomized experiments under minimal, realistic assumptions compatible with broad interference, where the aim is to reduce bias and possibly overall error in estimates of average effects of a global treatment. In design, we consider the ability to perform random assignment to treatments that is correlated in the network, such as through graph cluster randomization. In analysis, we consider incorporating information about the treatment assignment of network neighbors. We prove sufficient conditions for bias reduction through both design and analysis in the presence of potentially global interference; these conditions also give lower bounds on treatment effects. Through simulations of the entire process of experimentation in networks, we measure the performance of these methods under varied network structure and varied social behaviors, finding substantial bias reductions and, despite a bias–variance tradeoff, error reductions. These improvements are largest for networks with more clustering and for data generating processes with both stronger direct effects of the treatment and stronger interactions between units.
Keywords: causal inference; field experiments; peer effects; spillovers; social contagion; social network analysis; graph partitioning
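As a minimal sketch of the analysis idea (the fractional-neighborhood rule and its threshold q are illustrative assumptions, not the paper's exact specification): restrict attention to units whose neighbors' assignments are largely consistent with their own before differencing means.

    # Sketch: difference-in-means restricted to units whose neighbors'
    # assignments mostly agree with their own (threshold q is assumed).
    def fractional_exposure_ate(adj, z, y, q=0.75):
        """adj: vertex -> neighbor set; z: vertex -> 0/1; y: vertex -> outcome."""
        def frac_same(v):
            nbrs = adj[v]
            return 1.0 if not nbrs else sum(z[u] == z[v] for u in nbrs) / len(nbrs)
        treat = [y[v] for v in adj if z[v] == 1 and frac_same(v) >= q]
        ctrl = [y[v] for v in adj if z[v] == 0 and frac_same(v) >= q]
        return sum(treat) / len(treat) - sum(ctrl) / len(ctrl)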
Delay-dependent Stability of Genetic Regulatory Networks
Genetic regulatory networks are biochemical reaction systems, consisting of a network of interacting genes and associated proteins. The dynamics of genetic regulatory networks contain many complex facets that require careful consideration during the modeling process. The classical modeling approach involves studying systems of ordinary differential equations (ODEs) that model biochemical reactions in a deterministic, continuous, and instantaneous fashion. In reality, the dynamics of these systems are stochastic, discrete, and subject to significant delays. The first two complications are often successfully addressed by modeling regulatory networks using the Gillespie stochastic simulation algorithm (SSA), while the delayed behavior of biochemical events such as transcription and translation is often ignored due to its mathematically difficult nature. We develop techniques based on delay-differential equations (DDEs) and the delayed Gillespie SSA to study the effects of delays, in both continuous deterministic and discrete stochastic settings. Our analysis applies techniques from Floquet theory and advanced numerical analysis within the context of delay-differential equations, and we are able to derive stability sensitivities for biochemical switches and oscillators across the constituent pathways, showing which pathways in the regulatory networks improve or worsen the stability of the system attractors. These delay sensitivities can be far from trivial, and we offer a computational framework validated across multiple levels of modeling fidelity. This work suggests that delays may play an important and previously overlooked role in providing robust dynamical behavior for certain genetic regulatory networks and, perhaps more importantly, may offer an accessible tuning parameter for robust bioengineering.
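As a minimal sketch of the DDE side (a toy delayed negative-feedback circuit with illustrative parameters, not a model from the work itself): transcription is repressed by the protein level tau time units in the past, and a fixed-step Euler loop with a history buffer stands in for a proper DDE solver.

    # Sketch: toy delayed negative feedback (Hill repression by the
    # protein produced tau time units earlier), integrated by Euler.
    def simulate_dde(tau=10.0, k=1.0, n_hill=4, deg=0.1, dt=0.01, t_end=200.0):
        steps = int(t_end / dt)
        lag = int(tau / dt)
        x = [0.0] * (steps + 1)          # protein level; zero history for t < 0
        for i in range(steps):
            x_delayed = x[i - lag] if i >= lag else 0.0
            production = k / (1.0 + x_delayed ** n_hill)  # delayed repression
            x[i + 1] = x[i] + dt * (production - deg * x[i])
        return x

    trace = simulate_dde()  # sufficiently long delays can sustain oscillations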