A Game-Theoretic Framework for Medium Access Control
In this paper, we generalize the random access game model and show that it provides a general game-theoretic framework for designing contention-based medium access control. We extend the random access game model to networks with multiple contention measure signals, study the design of random access games, and analyze different distributed algorithms for achieving their equilibria. As examples, a series of utility functions is proposed for games achieving the maximum throughput in a network of homogeneous nodes. In a network with n traffic classes, an N-signal game model is proposed that achieves the maximum throughput under the fairness constraint among the different traffic classes. In addition, the convergence of different dynamic algorithms such as best response, gradient play and Jacobi play under propagation delay and estimation error is established. Simulation results show that game-model-based protocols can achieve superior performance over the standard IEEE 802.11 DCF, and performance comparable to the existing protocols with the best performance reported in the literature.
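As an illustration of the dynamics studied above, the following minimal Python sketch (not the paper's algorithm; the logarithmic utility, its slope a, and all names are assumptions made for this example) runs gradient play for a single-cell random access game in which each node's payoff is its utility of the access probability minus that probability times the conditional collision probability it faces.

def gradient_play(n_nodes=10, steps=2000, step_size=0.01):
    """Illustrative gradient-play dynamics for a random access game.

    Each node i picks a transmission probability p[i] and repeatedly moves
    it along the gradient of U_i(p_i) - p_i * q_i, where q_i is the
    conditional collision probability seen by node i and U_i is a concave
    utility (here the hypothetical choice U_i(p) = a * log(p)).
    """
    a = 0.05                       # assumed utility slope parameter
    p = [0.1] * n_nodes            # initial access probabilities
    for _ in range(steps):
        for i in range(n_nodes):
            # Conditional collision probability: some other node transmits.
            q_i = 1.0
            for j in range(n_nodes):
                if j != i:
                    q_i *= (1.0 - p[j])
            q_i = 1.0 - q_i
            # Gradient step on U_i(p_i) - p_i * q_i, kept inside (0, 1).
            grad = a / p[i] - q_i
            p[i] = min(0.99, max(0.001, p[i] + step_size * grad))
    return p

if __name__ == "__main__":
    print("equilibrium access probabilities:",
          [round(x, 3) for x in gradient_play()])

At a fixed point of this update, each node's marginal utility U_i'(p_i) equals its contention measure q_i, which is the kind of equilibrium condition the game framework is built around.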
On Asymptotic Optimality of Dual Scheduling Algorithm In A Generalized Switch
A generalized switch is a model of a queueing system in which parallel servers are interdependent and have time-varying service capabilities. This paper considers a dual scheduling algorithm that uses rate control and queue-length based scheduling to allocate resources in a generalized switch. We consider a saturated system in which each user has an infinite amount of data to be served. We prove the asymptotic optimality of the dual scheduling algorithm for such a system: the vector of average service rates achieved by the scheduling algorithm maximizes an aggregate concave utility function. As fairness objectives can be achieved by appropriately choosing the utility functions, this asymptotic optimality establishes the fairness properties of the dual scheduling algorithm.
The dual scheduling algorithm motivates a new architecture for scheduling, in which an additional queue is introduced to interface the user data queue and the time-varying server and to modulate the scheduling process, so as to achieve different performance objectives. Further research includes scheduling with quality-of-service guarantees under the dual scheduler, and its application and implementation in various versions of the generalized switch model.
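A minimal Python sketch of this architecture is given below, under the assumption of logarithmic utilities, a max-weight scheduling rule, and a uniform channel model (all illustrative choices, not the paper's setup): a virtual queue y[i] sits between the rate controller and the time-varying server and drives both decisions.

import random

def dual_scheduler(n_users=3, steps=20000, step_size=0.005):
    """Minimal sketch of a dual (queue-length based) scheduling algorithm.

    A virtual queue y[i] modulates scheduling: a rate controller picks a
    nominal rate r_i maximizing U_i(r) - y_i * r (here U_i = log, so
    r_i = 1 / y_i), while the scheduler serves, in each slot, the user
    with the largest y_i * mu_i(t) for time-varying capacities mu_i(t).
    The virtual queue integrates the difference between the two.
    """
    y = [1.0] * n_users          # virtual queue lengths (dual variables)
    served = [0.0] * n_users     # cumulative service per user
    for _ in range(steps):
        # Time-varying service capabilities of the generalized switch.
        mu = [random.uniform(0.0, 1.0) for _ in range(n_users)]
        # Rate control: r_i maximizes log(r) - y_i * r  =>  r_i = 1 / y_i.
        r = [1.0 / max(y_i, 1e-6) for y_i in y]
        # Queue-length based scheduling: serve the max-weight user.
        i_star = max(range(n_users), key=lambda i: y[i] * mu[i])
        served[i_star] += mu[i_star]
        # Virtual queue update (saturated system: data is always available).
        for i in range(n_users):
            departure = mu[i] if i == i_star else 0.0
            y[i] = max(1e-6, y[i] + step_size * (r[i] - departure))
    return [s / steps for s in served]

if __name__ == "__main__":
    print("long-run average service rates:",
          [round(x, 3) for x in dual_scheduler()])

The long-run average service rates printed at the end are the quantities whose optimality with respect to the aggregate utility is asserted above.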
Random Access Game and Medium Access Control Design
Motivated partially by a control-theoretic viewpoint, we propose a game-theoretic model, called the random access game, for contention control. We characterize the Nash equilibria of random access games, study their dynamics, and propose distributed algorithms (strategy evolutions) to achieve Nash equilibria. This provides a general analytical framework capable of modeling a large class of system-wide quality-of-service (QoS) models via the specification of per-node utility functions, in which system-wide fairness or service differentiation can be achieved in a distributed manner as long as each node executes a contention resolution algorithm designed to achieve the Nash equilibrium. We thus propose a novel medium access method, derived from carrier sense multiple access/collision avoidance (CSMA/CA), based on a distributed strategy-update mechanism that achieves the Nash equilibrium of the random access game. We present a concrete medium access method that adapts to a continuous contention measure called conditional collision probability, stabilizes the network into a steady state that achieves optimal throughput with targeted fairness (or service differentiation), and can decouple contention control from the handling of failed transmissions. In addition to guiding medium access control design, the random access game model also provides an analytical framework for understanding the equilibrium and dynamic properties of different medium access protocols.
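As a rough illustration of how a node could adapt a persistence probability to a measured conditional collision probability (a sketch only, not the concrete protocol proposed here; the utility choice, update period, and parameter values are assumptions), consider the following Python fragment implementing a damped, Jacobi-play style update from locally estimated collision statistics.

import random

def adapt_access_probability(utility_slope, observed_collision_prob,
                             p_current, step_size=0.1):
    """One Jacobi-play style update of a node's access probability.

    The node moves its persistence probability p a fraction of the way
    toward the best response to the measured conditional collision
    probability q; for the hypothetical utility U(p) = a * log(p), the
    best response solves U'(p) = q, i.e. p = a / q.
    """
    a = utility_slope
    best_response = min(0.99, a / max(observed_collision_prob, 1e-3))
    return p_current + step_size * (best_response - p_current)

def simulate(n_nodes=20, slots=50000):
    """Slotted toy simulation: each node estimates its conditional
    collision probability from its own attempts and adapts periodically."""
    p = [0.05] * n_nodes
    attempts = [0] * n_nodes
    collisions = [0] * n_nodes
    for slot in range(1, slots + 1):
        transmitting = [i for i in range(n_nodes) if random.random() < p[i]]
        for i in transmitting:
            attempts[i] += 1
            if len(transmitting) > 1:
                collisions[i] += 1
        # Every 1000 slots, each node updates from its local estimate.
        if slot % 1000 == 0:
            for i in range(n_nodes):
                if attempts[i] > 0:
                    q_hat = collisions[i] / attempts[i]
                    p[i] = adapt_access_probability(0.02, q_hat, p[i])
                attempts[i] = collisions[i] = 0
    return p

if __name__ == "__main__":
    print([round(x, 3) for x in simulate()])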
Same-Sign Dilepton Excesses and Vector-like Quarks
Multiple analyses from the ATLAS and CMS collaborations, including searches for
ttH production, supersymmetric particles and vector-like quarks, observed
excesses in the same-sign dilepton channel containing b-jets and missing
transverse energy in the LHC Run 1 data. In the context of little Higgs
theories with T parity, we explain these excesses using vector-like T-odd
quarks decaying into a top quark, a W boson and the lightest T-odd particle
(LTP). For heavy vector-like quarks, decay topologies containing the LTP have
not been searched for at the LHC. The bounds on the masses of the T-odd quarks
can be estimated in a simplified model approach by adapting the search limits
for top/bottom squarks in supersymmetry. Assuming a realistic decay branching
fraction, a benchmark with a 750 GeV T-odd b-prime quark is proposed. We also
comment on the possibility of fitting the excesses seen in different analyses within a common framework.
Testing Naturalness
Solutions to the electroweak hierarchy problem typically introduce a new
symmetry to stabilize the quadratic ultraviolet sensitivity in the self-energy
of the Higgs boson. The new symmetry is either broken softly or collectively,
as for example in supersymmetric and little Higgs theories. At low energies
such theories contain naturalness partners of the Standard Model fields which
are responsible for canceling the quadratic divergence in the squared Higgs
mass. Following the discovery of any partner-like particles, we propose to test the
aforementioned cancellation by measuring relevant Higgs couplings. Using the
fermionic top partners in little Higgs theories as an illustration, we
construct a simplified model for naturalness and initiate a study on testing
naturalness. After electroweak symmetry breaking, naturalness in the top sector requires $m_t\, g_{ht\bar t} + m_T\, g_{hT\bar T} = 0$ at leading order, where $g_{ht\bar t}$ and $g_{hT\bar T}$ are the Higgs couplings to a pair of top quarks and top partners, respectively.
Using a multivariate Boosted Decision Tree method to tag boosted Standard Model particles, we show that, with an integrated luminosity of 30 ab$^{-1}$ at a 100 TeV $pp$ collider, naturalness could be tested with a precision of 10% for a top partner mass up to 2.5 TeV.
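For concreteness, and under the assumption that the leading-order condition takes the coupling-mass sum-rule form reconstructed above (the symbols $g_{ht\bar t}$, $g_{hT\bar T}$ and the deviation measure $\Delta$ are illustrative notation, not necessarily the paper's), the test can be phrased as:

% Illustrative statement of the leading-order naturalness condition in the
% top sector; the coupling names and the deviation measure Delta are assumptions.
\[
  m_t\, g_{ht\bar t} + m_T\, g_{hT\bar T} = 0
  \quad\Longleftrightarrow\quad
  g_{hT\bar T} \simeq -\, g_{ht\bar t}\, \frac{m_t}{m_T},
\]
\[
  \Delta \;\equiv\; \frac{m_t\, g_{ht\bar t} + m_T\, g_{hT\bar T}}{m_t\, g_{ht\bar t}},
  \qquad \text{naturalness} \;\Rightarrow\; \Delta = 0,
\]

so measuring both Higgs couplings together with $m_t$ and $m_T$ constrains $|\Delta|$, and the quoted sensitivity corresponds to probing deviations at roughly the 10% level.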
Failure Localization in Power Systems via Tree Partitions
Cascading failures in power systems propagate non-locally, making the control
and mitigation of outages extremely hard. In this work, we use the emerging
concept of the tree partition of transmission networks to provide an analytical
characterization of line failure localizability in transmission systems. Our
results rigorously establish the widely held intuition in the power community
that failures cannot cross bridges, and reveal a finer-grained concept that
encodes more precise information on failure propagations within tree-partition
regions. Specifically, when a non-bridge line is tripped, the impact of this
failure only propagates within well-defined components, which we refer to as
cells, of the tree partition defined by the bridges. In contrast, when a bridge
line is tripped, the impact of this failure propagates globally across the
network, affecting the power flow on all remaining transmission lines. This
characterization suggests that it is possible to improve the system robustness
by temporarily switching off certain transmission lines, so as to create more,
smaller components in the tree partition, thereby spatially localizing line
failures and making the grid less vulnerable to large-scale outages. We
illustrate this approach using the IEEE 118-bus test system and demonstrate
that switching off a negligible portion of transmission lines allows the impact
of line failures to be significantly more localized without substantial changes in line congestion.
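The role of bridges can be made concrete with a small graph computation. The Python sketch below (using networkx on a toy network rather than the IEEE 118-bus system, and taking "cells" to simply mean the connected pieces left after deleting all bridges, a simplified reading of the decomposition above) finds the bridges and these pieces; per the characterization above, tripping a non-bridge line is expected to affect flows only within its own piece, while tripping a bridge affects the whole network.

import networkx as nx

def tree_partition_cells(graph):
    """Return the bridges of the network and the pieces obtained by
    deleting all bridges: each piece is a connected component of the
    bridge-free subgraph, used here as a stand-in for the cells in which
    non-bridge line failures stay localized."""
    bridges = set(nx.bridges(graph))
    core = graph.copy()
    core.remove_edges_from(bridges)
    cells = [set(c) for c in nx.connected_components(core)]
    return bridges, cells

if __name__ == "__main__":
    # Toy transmission network: two meshed regions joined by a single tie line.
    g = nx.Graph()
    g.add_edges_from([(1, 2), (2, 3), (3, 1),   # meshed region A
                      (3, 4),                   # bridge (tie line)
                      (4, 5), (5, 6), (6, 4)])  # meshed region B
    bridges, cells = tree_partition_cells(g)
    print("bridges:", bridges)   # e.g. {(3, 4)}
    print("cells:", cells)       # e.g. [{1, 2, 3}, {4, 5, 6}]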
Less is More: Real-time Failure Localization in Power Systems
Cascading failures in power systems exhibit non-local propagation patterns
which make the analysis and mitigation of failures difficult. In this work, we
propose a distributed control framework, inspired by the recently proposed concepts of a unified controller and network tree partition, that offers strong guarantees for both the mitigation and localization of cascading failures in
power systems. In this framework, the transmission network is partitioned into
several control areas which are connected in a tree structure, and the unified
controller is adopted by generators or controllable loads for fast timescale
disturbance response. After an initial failure, the proposed strategy always
prevents successive failures from happening, and regulates the system to the
desired steady state where the impact of the initial failures is localized as much
as possible. For extreme failures that cannot be localized, the proposed
framework has a configurable design that progressively involves and
coordinates more control areas for failure mitigation and, as a last resort,
imposes minimal load shedding. We compare the proposed control framework with
Automatic Generation Control (AGC) on the IEEE 118-bus test system. Simulation
results show that our novel framework greatly improves the system robustness in
terms of the N-1 security standard, and localizes the impact of initial
failures in the majority of the load profiles examined. Moreover, the proposed framework incurs significantly less load loss, if any, compared to AGC in all of our case studies.
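The escalation policy described above, namely progressively involving more control areas and shedding load only as a last resort, can be sketched abstractly in Python as follows; can_rebalance, shed_load, and the tree of control areas are hypothetical placeholders standing in for the underlying controller and optimization, so this illustrates the shape of the policy rather than the proposed framework itself.

def mitigate_failure(failed_area, area_tree, can_rebalance, shed_load):
    """Illustrative escalation policy for localizing an initial failure.

    Starting from the control area containing the failure, progressively
    include neighboring areas (following the tree structure connecting the
    areas) until the combined set can absorb the disturbance; if even the
    whole network cannot, fall back to (minimal) load shedding.

    can_rebalance(areas) -> bool and shed_load(areas) are hypothetical
    callbacks standing in for the controller and optimization layer.
    """
    involved = {failed_area}
    frontier = [failed_area]
    while True:
        if can_rebalance(involved):
            return involved                 # failure localized to these areas
        # Expand along the tree of control areas, one layer at a time.
        next_frontier = []
        for area in frontier:
            for neighbor in area_tree.get(area, []):
                if neighbor not in involved:
                    involved.add(neighbor)
                    next_frontier.append(neighbor)
        if not next_frontier:               # no areas left to involve
            shed_load(involved)             # last resort: minimal load shedding
            return involved
        frontier = next_frontier

if __name__ == "__main__":
    # Toy tree of control areas: A - B - C, with D hanging off B.
    tree = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B"], "D": ["B"]}
    areas = mitigate_failure("A", tree,
                             can_rebalance=lambda s: len(s) >= 3,  # stand-in
                             shed_load=lambda s: print("shedding in", s))
    print("areas involved:", areas)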
