Dueling Bandits with Team Comparisons

We introduce the dueling teams problem, a new online-learning setting in which the learner observes noisy comparisons of disjoint pairs of k-sized teams from a universe of n players. The goal of the learner is to minimize the number of duels required to identify, with high probability, a Condorcet winning team, i.e., a team which wins against any other disjoint team (with probability at least 1/2). Noisy comparisons are linked to a total order on the teams. We formalize our model by building upon the dueling bandits setting (Yue et al., 2012) and provide several algorithms, both for the stochastic and the deterministic settings. For the stochastic setting, we provide a reduction to the classical dueling bandits setting, yielding an algorithm that identifies a Condorcet winning team within O((n + k log(k)) max(log log(n), log(k)) / Δ^2) duels, where Δ is a gap parameter. For deterministic feedback, we additionally present a gap-independent algorithm that identifies a Condorcet winning team within O(nk log(k) + k^5) duels.
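To make the feedback model concrete, the following is a minimal Python sketch of a dueling-teams environment, not the paper's algorithms. It assumes, purely for illustration, that the total order on teams is induced by summed latent player strengths and that in every duel the better team wins with probability exactly 1/2 + Δ; all helper names (make_strengths, duel, is_condorcet_winning) are ours, not from the paper.

```python
import itertools
import random

def make_strengths(n, seed=0):
    """Draw latent strengths for n players (an illustrative instance,
    not part of the paper's model)."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

def duel(strengths, team_a, team_b, delta=0.2, rng=random):
    """Simulate one noisy duel; return True iff team_a wins.

    Teams are disjoint tuples of player indices. We assume the team
    with the larger total strength wins with probability 1/2 + delta,
    where delta plays the role of the gap parameter.
    """
    assert set(team_a).isdisjoint(team_b), "teams must be disjoint"
    a_better = (sum(strengths[i] for i in team_a) >
                sum(strengths[i] for i in team_b))
    p_a_wins = 0.5 + delta if a_better else 0.5 - delta
    return rng.random() < p_a_wins

def is_condorcet_winning(strengths, team, k):
    """Noise-free brute-force check that `team` beats every disjoint
    k-sized team under the assumed summed-strengths order."""
    rest = [i for i in range(len(strengths)) if i not in team]
    score = sum(strengths[i] for i in team)
    return all(score > sum(strengths[i] for i in other)
               for other in itertools.combinations(rest, k))

if __name__ == "__main__":
    n, k = 8, 3
    strengths = make_strengths(n)
    # Under this order the k strongest players form a Condorcet
    # winning team; the learner, of course, only sees duel outcomes.
    top_k = tuple(sorted(range(n), key=lambda i: -strengths[i])[:k])
    others = tuple(i for i in range(n) if i not in top_k)[:k]
    print(is_condorcet_winning(strengths, top_k, k))  # True
    print(duel(strengths, top_k, others))             # True w.p. 0.7
```

Note that the summed-strengths order is only one convenient way to realize a total order on teams; the paper's model does not require such additive structure.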

[1] M. Breton, et al. Separable preferences, strategyproofness, and decomposability, 1999.

[2] Soheil Mohajer, et al. Active Learning for Top-K Rank Aggregation from Noisy Comparisons, 2017, ICML.

[3] Yishay Mansour, et al. Top-k Combinatorial Bandits with Full-Bandit Feedback, 2020, ALT.

[4] Ingemar J. Cox, et al. Multi-Dueling Bandits and Their Application to Online Ranker Evaluation, 2016, CIKM.

[5] Xi Chen, et al. Optimal PAC Multiple Arm Identification with Applications to Crowdsourcing, 2014, ICML.

[6] Peter Stone, et al. Efficient Selection of Multiple Bandit Arms: Theory and Practice, 2010, ICML.

[7] Harlan D. Mills, et al. On Coin Weighing Problems, 1964.

[8] Andrzej Pelc, et al. Searching games with errors - fifty years of coping with liars, 2002, Theor. Comput. Sci.

[9] Sébastien Bubeck, et al. Multiple Identifications in Multi-Armed Bandits, 2012, ICML.

[10] Maria-Florina Balcan, et al. Learning Combinatorial Functions from Pairwise Comparisons, 2016, COLT.

[11] Shie Mannor, et al. Action Elimination and Stopping Conditions for the Multi-Armed Bandit and Reinforcement Learning Problems, 2006, J. Mach. Learn. Res.

[12] Nicolò Cesa-Bianchi, et al. Combinatorial Bandits, 2012, COLT.

[13] Eyke Hüllermeier, et al. Preference-based Online Learning with Dueling Bandits: A Survey, 2018, J. Mach. Learn. Res.

[14] J. Farkas. Theorie der einfachen Ungleichungen, 1902.

[15] N. Shroff, et al. The Sample Complexity of Best-k Items Selection from Pairwise Comparisons, 2020, ICML.

[16] Baruch Awerbuch, et al. Online linear optimization and adaptive routing, 2008, J. Comput. Syst. Sci.

[17] Rémi Munos, et al. Pure exploration in finitely-armed and continuous-armed bandits, 2011, Theor. Comput. Sci.

[18] Ness B. Shroff, et al. PAC Ranking from Pairwise and Listwise Queries: Lower Bounds and Upper Bounds, 2018, arXiv.

[19] Aditya Gopalan, et al. Battle of Bandits, 2018, UAI.

[20] Aurélien Garivier, et al. On the Complexity of Best-Arm Identification in Multi-Armed Bandit Models, 2014, J. Mach. Learn. Res.

[21] Thorsten Joachims, et al. The K-armed Dueling Bandits Problem, 2012, COLT.

[22] Wei Chen, et al. Combinatorial Pure Exploration of Multi-Armed Bandits, 2014, NIPS.

[23] Aleksandrs Slivkins. Introduction to Multi-Armed Bandits, 2019, Found. Trends Mach. Learn.

[24] Joel W. Burdick, et al. Multi-dueling Bandits with Dependent Arms, 2017, UAI.

[25] Jian Li, et al. Towards Instance Optimal Bounds for Best Arm Identification, 2016, COLT.