LinUCBHybridOptimizedPolicy is an optimized R implementation of "Algorithm 2 LinUCB with hybrid linear models" from Li et al. (2010), "A contextual-bandit approach to personalized news article recommendation".

Details

At each time step t, LinUCBHybridOptimizedPolicy runs a linear regression per arm that produces coefficients for each of the d context features. In the hybrid model of Li et al. (2010), some of these coefficients are shared by all arms, while others are arm-specific. On observing a new context, the policy generates a predicted payoff (reward) together with a confidence interval for each available arm, and then chooses the arm with the highest upper confidence bound.
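To make the selection rule concrete, here is a minimal standalone sketch of the per-arm upper-confidence score from Algorithm 2 in Li et al. (2010). It is not the package's internal code; z holds the shared features, x the arm-specific features, and all names are illustrative:

hybrid_ucb <- function(z, x, A0, b0, A, B, b, alpha) {
  A0_inv    <- solve(A0)                        # inverse of shared Gram matrix (k x k)
  A_inv     <- solve(A)                         # inverse of the arm's Gram matrix (d x d)
  beta_hat  <- A0_inv %*% b0                    # shared coefficients
  theta_hat <- A_inv %*% (b - B %*% beta_hat)   # arm-specific coefficients
  # variance term s of Algorithm 2; B is the arm's d x k cross-correlation matrix
  s <- t(z) %*% A0_inv %*% z -
       2 * (t(z) %*% A0_inv %*% t(B) %*% A_inv %*% x) +
       t(x) %*% A_inv %*% x +
       t(x) %*% A_inv %*% B %*% A0_inv %*% t(B) %*% A_inv %*% x
  # predicted payoff plus alpha-scaled confidence width
  as.numeric(t(z) %*% beta_hat + t(x) %*% theta_hat + alpha * sqrt(s))
}

The policy computes this score for every available arm and plays the arm that maximizes it.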

Usage

policy <- LinUCBHybridOptimizedPolicy(alpha = 1.0)
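For orientation, a hedged sketch of how such a policy is typically combined with the package's Bandit, Agent, and Simulator classes; the choice of bandit and all arguments other than alpha are assumptions:

library(contextual)

bandit    <- ContextualLinearBandit$new(k = 4, d = 3)     # simulated linear bandit (assumed setup)
policy    <- LinUCBHybridOptimizedPolicy$new(alpha = 1.0)
agent     <- Agent$new(policy, bandit)
simulator <- Simulator$new(agent, horizon = 100, simulations = 10)
history   <- simulator$run()
plot(history, type = "cumulative")                        # cumulative reward over time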

Arguments

alpha

double, a positive real value in R+; hyper-parameter adjusting the balance between exploration and exploitation. Higher values of alpha widen the confidence bounds and so encourage exploration.

name

character string specifying this policy. name is, among other things, saved to the History log and displayed in summaries and plots.

Parameters

A

a d × d identity matrix

b

a zero vector of length d

Methods

new(alpha = 1)

Generates a new LinUCBHybridOptimizedPolicy object. Arguments are defined in the Arguments section above.

set_parameters()

Each policy needs to assign the parameters it wants to keep track of to the list self$theta_to_arms, which has to be defined in the body of set_parameters(). The parameters defined here can later be accessed by arm index as follows: theta[[index_of_arm]]$parameter_name.
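As a standalone illustration of this pattern (names are illustrative, not the policy's actual fields): a template list of per-arm parameters is replicated for each of k arms and then accessed by arm index:

d <- 3; k <- 4
theta_to_arms <- list(A = diag(1, d, d), b = rep(0, d))    # per-arm parameter template
theta <- lapply(seq_len(k), function(arm) theta_to_arms)   # replicate over the k arms
theta[[2]]$A                                               # theta[[index_of_arm]]$parameter_name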

get_action(context)

Here, a policy decides which arm to choose based on the current values of its parameters and, potentially, the current context.
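A simplified standalone sketch of such a decision, in the disjoint form for brevity (the hybrid policy additionally scores the shared features, as sketched in Details above); X is a d x k matrix holding one context column per arm, and all names are assumptions:

choose_arm <- function(X, A_list, b_list, alpha) {
  k <- ncol(X)
  ucb <- numeric(k)
  for (arm in seq_len(k)) {
    A_inv     <- solve(A_list[[arm]])
    theta_hat <- A_inv %*% b_list[[arm]]               # ridge regression coefficients
    x         <- X[, arm]
    ucb[arm]  <- as.numeric(t(x) %*% theta_hat +       # predicted payoff ...
                 alpha * sqrt(t(x) %*% A_inv %*% x))   # ... plus confidence width
  }
  which.max(ucb)                                       # arm with the highest UCB
}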

set_reward(reward, context)

In set_reward(reward, context), a policy updates its parameter values based on the reward received and, potentially, the current context.
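In linear UCB policies this amounts to a rank-one update of the chosen arm's Gram matrix A and reward-weighted context sum b; a minimal standalone sketch (names are assumptions):

update_arm <- function(A, b, x, reward) {
  A <- A + tcrossprod(x)    # A <- A + x x^T
  b <- b + reward * x       # b <- b + r x
  list(A = A, b = b)
}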

References

Li, L., Chu, W., Langford, J., & Schapire, R. E. (2010, April). A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web (pp. 661–670). ACM.

See also