Payoff Performance of Fictitious Play
We investigate how well continuous-time fictitious play in two-player games
performs in terms of average payoff, particularly compared to Nash equilibrium
payoff. We show that in many games, fictitious play outperforms Nash
equilibrium on average or even at all times, and moreover that any game is
linearly equivalent to one in which this is the case. Conversely, we provide
conditions under which Nash equilibrium payoff dominates fictitious play
payoff. A key step in our analysis is to show that fictitious play dynamics
asymptotically converges to the set of coarse correlated equilibria (a fact
which is implicit in the literature).
Comment: 16 pages, 4 figures
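The payoff comparison in the abstract can be illustrated with a minimal discrete-time sketch of fictitious play (the paper itself studies the continuous-time dynamic); the game (matching pennies), the uniform prior counts, and the horizon are illustrative assumptions, not choices from the paper:

```python
import numpy as np

# Discrete-time fictitious play on matching pennies: each round, each player
# best-responds to the opponent's empirical action frequencies so far.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])  # player 1's payoffs; player 2 gets -A (zero-sum)
B = -A

n1 = np.array([1.0, 1.0])    # empirical action counts (uniform prior)
n2 = np.array([1.0, 1.0])
payoffs = []
T = 10_000
for _ in range(T):
    a1 = int(np.argmax(A @ (n2 / n2.sum())))    # best response to opponent
    a2 = int(np.argmax(B.T @ (n1 / n1.sum())))
    payoffs.append(A[a1, a2])
    n1[a1] += 1.0
    n2[a2] += 1.0

avg_payoff = float(np.mean(payoffs))
freq1 = n1 / n1.sum()
# In this zero-sum game the Nash value is 0 and the unique equilibrium mixture
# is (1/2, 1/2); fictitious play's time-average payoff and empirical
# frequencies approach both.
print(avg_payoff, freq1)
```

In zero-sum games fictitious play converges to equilibrium (Robinson's theorem), so here the average payoff tends to the game value; the paper's point is that in many general-sum games the average payoff can strictly exceed the Nash payoff.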
Increasing the Action Gap: New Operators for Reinforcement Learning
This paper introduces new optimality-preserving operators on Q-functions. We
first describe an operator for tabular representations, the consistent Bellman
operator, which incorporates a notion of local policy consistency. We show that
this local consistency leads to an increase in the action gap at each state;
increasing this gap, we argue, mitigates the undesirable effects of
approximation and estimation errors on the induced greedy policies. This
operator can also be applied to discretized continuous space and time problems,
and we provide empirical results evidencing superior performance in this
context. Extending the idea of a locally consistent operator, we then derive
sufficient conditions for an operator to preserve optimality, leading to a
family of operators which includes our consistent Bellman operator. As
corollaries we provide a proof of optimality for Baird's advantage learning
algorithm and derive other gap-increasing operators with interesting
properties. We conclude with an empirical study on 60 Atari 2600 games
illustrating the strong potential of these new operators.
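The gap-increasing idea can be sketched in the tabular setting with Baird's advantage learning operator, T_AL Q(x,a) = T Q(x,a) - alpha (max_b Q(x,b) - Q(x,a)), one member of the operator family the abstract mentions. The tiny random MDP and the value of alpha below are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma, alpha = 4, 2, 0.9, 0.5
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a] = next-state dist.
R = rng.uniform(size=(nS, nA))                 # rewards r(s, a)

def bellman(Q):
    """Standard Bellman optimality operator (T Q)(s, a)."""
    return R + gamma * P @ Q.max(axis=1)

def advantage_learning(Q):
    """Gap-increasing operator: penalize actions below the current max."""
    return bellman(Q) - alpha * (Q.max(axis=1, keepdims=True) - Q)

Q_std = np.zeros((nS, nA))
Q_al = np.zeros((nS, nA))
for _ in range(2000):
    Q_std = bellman(Q_std)
    Q_al = advantage_learning(Q_al)

def gap(Q):
    """Action gap: best minus second-best Q-value at each state."""
    s = np.sort(Q, axis=1)
    return s[:, -1] - s[:, -2]

# Optimality preserved (same greedy policy), but the action gap widens.
print(Q_std.argmax(axis=1), Q_al.argmax(axis=1))
print(gap(Q_std), gap(Q_al))
```

At the fixed point the suboptimal Q-values are pushed down by a factor of 1/(1 - alpha) relative to the state value, so the greedy policy is unchanged while the gap grows, which is exactly the robustness-to-estimation-error argument the abstract makes.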
Rainbow: Combining Improvements in Deep Reinforcement Learning
The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance.
Comment: Under review as a conference paper at AAAI 2018
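One of the DQN extensions Rainbow combines is double Q-learning, which decouples action selection from action evaluation to reduce overestimation. A minimal sketch of the two target computations, with random arrays standing in for network outputs (the shapes and values are assumptions, not Rainbow's actual networks):

```python
import numpy as np

rng = np.random.default_rng(1)
batch, nA, gamma = 5, 4, 0.99
q_online_next = rng.normal(size=(batch, nA))  # Q_online(s', .)
q_target_next = rng.normal(size=(batch, nA))  # Q_target(s', .)
rewards = rng.normal(size=batch)
done = np.zeros(batch)                        # 1.0 for terminal transitions

# Standard DQN target: max over the target network (prone to overestimation).
dqn_target = rewards + gamma * (1 - done) * q_target_next.max(axis=1)

# Double DQN target: select the action with the online network, but
# evaluate it with the target network.
best_a = q_online_next.argmax(axis=1)
ddqn_target = (rewards
               + gamma * (1 - done) * q_target_next[np.arange(batch), best_a])
```

By construction the double-DQN target is never larger than the standard one, since evaluating any chosen action can at most match the max; the other extensions studied in the paper (prioritized replay, dueling networks, multi-step returns, distributional RL, noisy nets) modify the replay buffer, architecture, or target in analogous, composable ways.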
