Towards Minimax Online Learning with Unknown Time Horizon
We consider online learning when the time horizon is unknown. We apply a
minimax analysis, beginning with the fixed-horizon case and then moving on to
two unknown-horizon settings: one that assumes the horizon is chosen randomly
according to a known distribution, and one that gives the adversary full
control over the horizon. For the random horizon setting with restricted
losses, we derive a fully optimal minimax algorithm. For the adversarial
horizon setting, we prove a nontrivial lower bound showing that the adversary
gains strictly more power than when the horizon is fixed and known. Based on
the minimax solution of the random horizon setting, we then propose a new
adaptive algorithm that "pretends" the horizon is drawn from a distribution in
a special family; no matter how the actual horizon is chosen, the algorithm's
worst-case regret is of the optimal rate. Furthermore, our algorithm can be
combined with and applied to many other methods, for instance online convex
optimization, follow-the-perturbed-leader, the exponential weights algorithm,
and first-order bounds. Experiments show that our algorithm outperforms many
existing algorithms in an online linear optimization setting.
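To make the anytime idea concrete, here is a minimal sketch of exponential weights run with a time-varying learning rate eta_t = sqrt(ln(N)/t), a standard way to achieve optimal-rate regret without knowing the horizon in advance. This is an illustration only, not the paper's algorithm: the paper instead simulates a horizon drawn from a special prior family, and the loss matrix and expert setup below are hypothetical.

```python
import numpy as np

def anytime_exponential_weights(loss_matrix):
    """Exponential weights with a time-varying learning rate.

    Illustrative sketch only: eta_t = sqrt(ln(N)/t) removes the need to
    know the horizon T in advance; the paper's algorithm instead
    "pretends" the horizon is drawn from a special prior family.

    loss_matrix: (T, N) array of losses in [0, 1] for N experts.
    Returns the learner's regret after each round.
    """
    T, N = loss_matrix.shape
    cum_losses = np.zeros(N)   # cumulative loss of each expert
    learner_loss = 0.0
    regrets = []
    for t in range(1, T + 1):
        eta = np.sqrt(np.log(N) / t)   # anytime learning rate
        w = np.exp(-eta * cum_losses)
        p = w / w.sum()                # distribution over experts
        losses = loss_matrix[t - 1]
        learner_loss += p @ losses     # expected loss this round
        cum_losses += losses
        regrets.append(learner_loss - cum_losses.min())
    return regrets

# Example: 5 experts with i.i.d. uniform losses over 1000 rounds.
rng = np.random.default_rng(0)
print(anytime_exponential_weights(rng.random((1000, 5)))[-1])
```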
Margin-based Ranking and an Equivalence between AdaBoost and RankBoost
We study boosting algorithms for learning to rank. We give a general margin-based bound for
ranking based on covering numbers for the hypothesis space. Our bound suggests that algorithms
that maximize the ranking margin will generalize well. We then describe a new algorithm, smooth
margin ranking, that precisely converges to a maximum ranking-margin solution. The algorithm
is a modification of RankBoost, analogous to “approximate coordinate ascent boosting.” Finally,
we prove that AdaBoost and RankBoost are equally good for the problems of bipartite ranking and
classification in terms of their asymptotic behavior on the training set. Under natural conditions,
AdaBoost achieves an area under the ROC curve that is as good as RankBoost’s; conversely,
RankBoost, when given a specific intercept, achieves a misclassification error that is as good
as AdaBoost’s. This may help to explain the empirical observations made by Cortes and Mohri, and
by Caruana and Niculescu-Mizil, about the excellent performance of AdaBoost as a bipartite ranking
algorithm, as measured by the area under the ROC curve.
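The claim that AdaBoost's classification scores also rank well can be checked empirically. The following is a minimal sketch, not from the paper, using scikit-learn's AdaBoostClassifier on hypothetical synthetic data: the real-valued combined score from decision_function is evaluated as a bipartite ranker via the area under the ROC curve.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical setup: train AdaBoost purely for classification,
# then score its output as a bipartite ranker via AUC.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

clf = AdaBoostClassifier(n_estimators=200, random_state=0)
clf.fit(X[:1000], y[:1000])

# decision_function returns the real-valued combined score of the
# ensemble; ranking held-out points by this score yields the AUC.
scores = clf.decision_function(X[1000:])
print("AdaBoost AUC:", roc_auc_score(y[1000:], scores))
```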
