Performance measurement and data base design
Alfonso P. Cardenas, Larry F. Bowman, et al.
ACM Annual Conference 1975
We consider a class of multi-armed bandit problems in which the set of available actions can be mapped to a convex, compact region of ℝ^d, sometimes called the "continuum-armed bandit" problem. We establish bounds on the efficiency of any arm-selection procedure under certain conditions on the class of possible underlying mean reward functions. Both finite-time lower bounds on the growth rate of the regret and asymptotic upper bounds on the rates of convergence of the selected control values to the optimum are derived. We explicitly characterize the dependence of these convergence rates on the minimal rate of variation of the mean reward function in a neighborhood of the optimal control. The bounds can be used to demonstrate the asymptotic optimality of the Kiefer-Wolfowitz method of stochastic approximation for a large class of possible mean reward functions.
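The Kiefer-Wolfowitz method mentioned above estimates the gradient of the unknown mean reward function from noisy two-point finite differences and takes a projected ascent step with decaying step-size and perturbation schedules. The following is a minimal one-dimensional sketch, not the paper's implementation; the schedules `a/n` and `c/n^(1/4)`, the quadratic test reward, and all names are illustrative assumptions.

```python
import random

def kiefer_wolfowitz(reward, x0, n_steps=5000, a=1.0, c=1.0, lo=0.0, hi=1.0):
    """Maximize an unknown mean reward from noisy samples via
    Kiefer-Wolfowitz finite-difference stochastic approximation."""
    x = x0
    for n in range(1, n_steps + 1):
        a_n = a / n              # step-size schedule (assumed)
        c_n = c / n ** 0.25      # perturbation-width schedule (assumed)
        # Noisy two-point finite-difference gradient estimate,
        # with evaluation points clipped to the action region [lo, hi].
        grad = (reward(min(x + c_n, hi)) - reward(max(x - c_n, lo))) / (2 * c_n)
        x = min(max(x + a_n * grad, lo), hi)  # projected ascent step
    return x

# Illustration: quadratic mean reward peaked at 0.7, Gaussian observation noise.
def noisy_reward(x, sigma=0.1):
    return -(x - 0.7) ** 2 + random.gauss(0.0, sigma)

random.seed(0)
x_hat = kiefer_wolfowitz(noisy_reward, x0=0.2)
print(x_hat)
```

With these schedules the iterate drifts toward the maximizer at 0.7; the convergence rate depends on how sharply the mean reward varies near the optimum, which is the dependence the abstract's bounds characterize.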