We provide new lower bounds on the regret that must be suffered by adversarial bandit algorithms. The new results show that recent upper bounds that either (a) hold with high probability, (b) depend on the total loss of the best arm, or (c) depend on the quadratic variation of the losses are close to tight. Besides this, we prove two impossibility results. First, the existence of a single arm that is optimal in every round cannot improve the regret in the worst case. Second, the regret cannot scale with the effective range of the losses.

- Read more: Refined Lower Bounds for Adversarial Bandits

This paper is devoted to the estimation of conditional quantiles, more precisely the quantiles of the output of a real-valued stochastic code whose inputs lie in R^d. For this purpose, we introduce a stochastic algorithm based on the Robbins-Monro algorithm and on k-nearest neighbors theory. We propose conditions on the code under which the algorithm converges, and study the non-asymptotic rate of convergence of the mean squared error. Finally, we give optimal parameters of the algorithm to obtain the best rate of convergence.

- Read more: Conditional quantile sequential estimation for stochastic codes
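As a minimal illustration of the Robbins-Monro idea behind this algorithm (without the k-nearest-neighbors conditioning step, and with a hypothetical step-size schedule rather than the paper's optimal parameters), one can estimate an unconditional quantile of a stream of code outputs:

```python
import random

def robbins_monro_quantile(samples, alpha, q0=0.0, gamma=lambda n: n ** -0.7):
    """Estimate the alpha-quantile of a stream of samples.

    Stochastic-approximation update: the estimate moves up by gamma(n) * alpha
    when the sample exceeds it and down by gamma(n) * (1 - alpha) otherwise,
    so the fixed point q* satisfies P(Y <= q*) = alpha.
    """
    q = q0
    for n, y in enumerate(samples, start=1):
        q += gamma(n) * (alpha - (1.0 if y <= q else 0.0))
    return q

random.seed(0)
# Stream of standard Gaussian "code outputs"; the true 0.9-quantile is ~1.2816.
stream = (random.gauss(0.0, 1.0) for _ in range(200_000))
q_hat = robbins_monro_quantile(stream, alpha=0.9)
```

The step sizes here decay like n^{-0.7}; the paper's point is precisely that the choice of such parameters governs the rate of convergence of the mean squared error.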

We study the problem of minimising regret in two-armed bandit problems with Gaussian rewards. Our objective is to use this simple setting to illustrate that strategies based on an exploration phase (up to a stopping time) followed by exploitation are necessarily suboptimal. The results hold regardless of whether or not the difference in means between the two arms is known.

- Read more: On Explore-Then-Commit Strategies
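For concreteness, here is a minimal sketch of the explore-then-commit template the abstract analyzes, in a two-armed Gaussian bandit; the arm means, horizon, and exploration length are illustrative choices, not quantities from the paper:

```python
import random

def explore_then_commit(means, horizon, n_explore, rng):
    """Pull each arm n_explore times, then commit to the empirical best arm.

    Returns the committed arm and the total reward collected over the horizon.
    """
    k = len(means)
    sums = [0.0] * k
    total = 0.0
    # Exploration phase: round-robin over the arms.
    for _ in range(n_explore):
        for a in range(k):
            r = rng.gauss(means[a], 1.0)
            sums[a] += r
            total += r
    best = max(range(k), key=lambda a: sums[a])
    # Exploitation phase: play the empirical best arm until the horizon.
    for _ in range(horizon - k * n_explore):
        total += rng.gauss(means[best], 1.0)
    return best, total

rng = random.Random(1)
chosen, reward = explore_then_commit([1.0, 0.0], horizon=1000, n_explore=50, rng=rng)
```

The paper's result is that no choice of stopping time for the exploration phase can make such a strategy match the optimal regret of fully sequential strategies.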

We revisit lower bounds on the regret in the case of multi-armed bandit problems. We obtain non-asymptotic, distribution-dependent bounds and provide straightforward proofs based only on well-known properties of Kullback-Leibler divergences. These bounds show in particular that in an initial phase the regret grows almost linearly, and that the well-known logarithmic growth of the regret only holds in a final phase. The proof techniques distill the essence of the information-theoretic arguments used and are stripped of all unnecessary complications.

- Read more: Explore First, Exploit Next: The True Shape of Regret in Bandit Problems
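The Kullback-Leibler divergences at the heart of such bounds are elementary to compute; for Bernoulli distributions, the standard case in bandit lower bounds, a small sketch (the clipping constant and horizon below are illustrative) is:

```python
import math

def kl_bernoulli(p, q):
    """KL divergence kl(p, q) between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-15  # clip away from 0 and 1 to avoid log(0)
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Classical distribution-dependent bounds say a suboptimal arm with mean mu
# must be pulled on the order of log(T) / kl(mu, mu_star) times by any
# uniformly efficient strategy.
pulls_scale = math.log(10_000) / kl_bernoulli(0.4, 0.5)
```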

We study an original problem of pure exploration in a strategic bandit model motivated by Monte Carlo Tree Search. It consists in identifying the best action in a game, when the player may sample random outcomes of sequentially chosen pairs of actions. We propose two strategies for the fixed-confidence setting: Maximin-LUCB, based on lower- and upper-confidence bounds; and Maximin-Racing, which operates by successively eliminating the sub-optimal actions. We discuss the sample complexity of both methods and compare their performance empirically.

- Read more: Maximin Action Identification: A New Bandit Framework for Games
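A toy sketch of the elimination idea behind a racing strategy in this maximin setting (Bernoulli outcomes, a hypothetical confidence radius, and a small payoff matrix chosen for illustration; the paper's actual rules and guarantees differ):

```python
import math
import random

def maximin_racing(mu, rng, max_rounds=200_000):
    """Identify argmax_a min_b mu[a][b] by successive elimination.

    Each round samples every pair involving a still-active first action,
    then drops any action whose optimistic maximin value falls below the
    best pessimistic one.
    """
    A, B = len(mu), len(mu[0])
    n = 0
    sums = [[0.0] * B for _ in range(A)]
    active = set(range(A))
    while len(active) > 1 and n < max_rounds:
        n += 1
        for a in active:
            for b in range(B):
                sums[a][b] += 1.0 if rng.random() < mu[a][b] else 0.0
        rad = math.sqrt(math.log(max(n, 2)) / n)  # hypothetical radius
        lcb = {a: min(sums[a][b] / n - rad for b in range(B)) for a in active}
        ucb = {a: min(sums[a][b] / n + rad for b in range(B)) for a in active}
        best_lcb = max(lcb.values())
        active = {a for a in active if ucb[a] >= best_lcb}
    return max(active, key=lambda a: min(sums[a][b] for b in range(B)))

rng = random.Random(0)
# Action 0 has maximin value 0.5, action 1 only 0.3.
winner = maximin_racing([[0.9, 0.5], [0.6, 0.3]], rng)
```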

We provide a complete characterization of the complexity of best-arm identification in one-parameter bandit problems. We prove a new, tight lower bound on the sample complexity. We propose the 'Track-and-Stop' strategy, which is proved to be asymptotically optimal. It combines a new sampling rule (which tracks the optimal proportions of arm draws highlighted by the lower bound) with a stopping rule named after Chernoff, for which we give a new analysis.

- Read more: Optimal Best Arm Identification with Fixed Confidence
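The tracking component of such a sampling rule can be sketched in isolation: given target proportions w (here fixed for illustration, whereas Track-and-Stop re-estimates the optimal proportions online), pull the arm whose count lags most behind w[a] * t:

```python
def track(w, t_max):
    """Pull, at each time t, the arm maximizing w[a] * t - counts[a].

    This keeps the empirical draw proportions counts[a] / t close to the
    target proportions w.
    """
    k = len(w)
    counts = [0] * k
    for t in range(1, t_max + 1):
        a = max(range(k), key=lambda i: w[i] * t - counts[i])
        counts[a] += 1
    return counts

counts = track([0.5, 0.3, 0.2], 1000)
```

After 1000 rounds the counts stay within a few pulls of (500, 300, 200), which is the point of a tracking rule.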

We address the practical construction of asymptotic confidence intervals for smooth (i.e., pathwise differentiable), real-valued statistical parameters by targeted learning from independent and identically distributed data, in contexts where the sample size is so large that it poses computational challenges. We observe some summary measure of all the data and select a sub-sample from the complete data set by Poisson rejective sampling with unequal inclusion probabilities based on the summary measures. Targeted learning is then carried out on the easier-to-handle sub-sample.

- Read more: Practical targeted learning from large data sets by survey sampling
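The sub-sampling step can be sketched as follows. This is a simplified stand-in: inclusion probabilities are taken proportional to the summary measures (scaled so the expected size is n), and rejective sampling is implemented by redrawing Poisson samples until the target size is hit; the paper's construction is more careful:

```python
import random

def poisson_rejective_sample(summaries, n, rng):
    """Draw a sample of exact size n with unequal inclusion probabilities.

    Each unit i enters independently with probability pi_i proportional to
    its summary measure; draws are rejected until the realized size is n.
    """
    total = sum(summaries)
    pi = [min(1.0, n * s / total) for s in summaries]
    while True:
        sample = [i for i, p in enumerate(pi) if rng.random() < p]
        if len(sample) == n:
            return sample

rng = random.Random(42)
summaries = [1.0 + 0.01 * i for i in range(1000)]  # hypothetical summary measures
sub = poisson_rejective_sample(summaries, n=100, rng=rng)
```

Conditioning the Poisson draw on its size is what makes the design "rejective"; units with larger summary measures are included more often, which the subsequent targeted-learning step must account for through the inclusion probabilities.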

We study a variant of the multi-armed bandit problem with multiple plays in which the user wishes to sample the m out of k arms with the highest expected rewards, but at any given time can only sample ℓ ≤ m arms. When ℓ = m, Thompson sampling was recently shown to be asymptotically efficient. We derive an asymptotic regret lower bound for any uniformly efficient algorithm in our new setting where ℓ may be less than m. We then establish the asymptotic optimality of Thompson sampling for Bernoulli rewards, where our proof technique differs from earlier methods even when ℓ = m.

- Read more: Asymptotically Optimal Algorithms for Multiple Play Bandits with Partial Feedback
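A sketch of Thompson sampling with Bernoulli rewards and multiple plays (the arm means, horizon, and number of plays are illustrative, and the partial-feedback subtleties the paper analyzes when ℓ < m are not modeled here):

```python
import random

def thompson_top_l(means, ell, horizon, rng):
    """Thompson sampling that plays the top-ell arms of each posterior draw.

    Beta(1, 1) priors; each round draws one sample per posterior, plays the
    ell arms with the largest draws, and updates the played arms with the
    observed Bernoulli rewards.
    """
    k = len(means)
    alpha = [1] * k
    beta = [1] * k
    pulls = [0] * k
    for _ in range(horizon):
        theta = [rng.betavariate(alpha[a], beta[a]) for a in range(k)]
        played = sorted(range(k), key=lambda a: theta[a], reverse=True)[:ell]
        for a in played:
            r = 1 if rng.random() < means[a] else 0
            alpha[a] += r
            beta[a] += 1 - r
            pulls[a] += 1
    return pulls

rng = random.Random(7)
pulls = thompson_top_l([0.9, 0.8, 0.2, 0.1], ell=2, horizon=2000, rng=rng)
```

With well-separated means, the play counts concentrate on the two best arms, matching the asymptotic efficiency the abstract refers to.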

We consider a setting where a real-valued cause variable X affects a real-valued effect variable Y in the presence of a context variable W. The objective is to assess to what extent (X, W) influences Y while making as few assumptions as possible on the unknown distribution of (W, X, Y). Based on a user-supplied marginal structural model, our new variable importance measure is non-parametric and context-adjusted. It generalizes the variable importance measure introduced by Chambaz et al. [4].

The ANR project SPADRO supports the Third ISNPS Conference.

In particular, SPADRO organizes and sponsors the invited session entitled *Semiparametric inference*. It will consist of four talks, by Emilien Joly (Modal'X, Paris Ouest Nanterre), Alex Luedtke (Fred Hutch), Wenjing Zheng (Division of Biostatistics, UC Berkeley), and Antoine Chambaz (Modal'X, Université Paris Ouest).

- Read more: SPADRO supports the Third ISNPS Conference