Samples data from linearly parameterized arms.
The reward for context X and arm j is given by X^T beta_j, for some latent set of parameters beta_j, j = 1, ..., k. The betas are sampled uniformly at random, the contexts are drawn from a Gaussian distribution, and Gaussian noise with standard deviation sigma is added to the rewards.
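To make the data-generating process concrete, the following sketch reproduces it in plain R. It is illustrative only: the Uniform(0, 1) range for the betas and the exact noise model are assumptions, not taken from the package source.

set.seed(42)
k <- 5; d <- 5; sigma <- 0.1

# Latent parameters beta_j, one column per arm (assumed Uniform(0, 1) draws).
beta <- matrix(runif(d * k), nrow = d, ncol = k)

# A d-dimensional Gaussian context vector X.
X <- rnorm(d)

# Expected reward per arm is X^T beta_j; additive Gaussian noise with sd sigma.
rewards <- as.vector(t(X) %*% beta) + rnorm(k, sd = sigma)
rewards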
bandit <- ContextualLinearBandit$new(k, d, sigma = 0.1, binary_rewards = FALSE)
k: integer; number of bandit arms.
d: integer; number of contextual features.
sigma: numeric; standard deviation of the additive noise. Set to zero for no noise. Default is 0.1.
binary_rewards: logical; when set to FALSE (default), ContextualLinearBandit generates Gaussian rewards. When set to TRUE, rewards are binary (0/1).
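For example, a Gaussian-reward bandit and a binary-reward variant could be instantiated as follows (parameter values chosen purely for illustration):

bandit_gauss  <- ContextualLinearBandit$new(k = 4, d = 10, sigma = 0.5)
bandit_binary <- ContextualLinearBandit$new(k = 4, d = 10, binary_rewards = TRUE)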
new(k, d, sigma = 0.1, binary_rewards = FALSE)
generates and instantiates a new ContextualLinearBandit instance.
get_context(t)
argument:
t: integer, time step t.
returns: a list containing the current d x k dimensional matrix context$X, the number of arms context$k, and the number of features context$d.
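For instance (illustrative only; the contents of context$X depend on the bandit's random draws):

context <- bandit$get_context(t = 1)
dim(context$X)   # d x k
context$k        # number of arms
context$d        # number of contextual features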
get_reward(t, context, action)
arguments:
t: integer, time step t.
context: list containing the current context$X (d x k context matrix), context$k (number of arms), and context$d (number of context features), as set by bandit.
action: list containing action$choice, as set by policy.
returns: a list containing reward$reward and, where computable, reward$optimal (used by "oracle" policies and to calculate regret).

post_initialization()
initializes the d x k beta matrix.
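Taken together, one round of interaction might look like the sketch below. Note that post_initialization() is normally invoked by the simulation framework; it is called explicitly here, and the arm choice is hard-coded, only to keep the sketch self-contained.

bandit  <- ContextualLinearBandit$new(k = 5, d = 5, sigma = 0.1)
bandit$post_initialization()              # samples the d x k beta matrix
context <- bandit$get_context(t = 1)
action  <- list(choice = 1)               # normally set by a Policy
reward  <- bandit$get_reward(t = 1, context, action)
reward$reward                             # realized reward for the chosen arm
reward$optimal                            # best achievable reward, used for regret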
Riquelme, C., Tucker, G., & Snoek, J. (2018). Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling. arXiv preprint arXiv:1802.09127.
Implementation follows https://github.com/tensorflow/models/tree/master/research/deep_contextual_bandits
Core contextual classes: Bandit, Policy, Simulator, Agent, History, Plot
Bandit subclass examples: BasicBernoulliBandit, ContextualLogitBandit, OfflineReplayEvaluatorBandit
Policy subclass examples: EpsilonGreedyPolicy, ContextualLinTSPolicy
if (FALSE) {
  horizon     <- 800L
  simulations <- 30L

  bandit      <- ContextualLinearBandit$new(k = 5, d = 5)

  # Compare an epsilon-greedy policy against LinUCB on the same bandit.
  agents      <- list(Agent$new(EpsilonGreedyPolicy$new(0.1), bandit),
                      Agent$new(LinUCBDisjointOptimizedPolicy$new(0.6), bandit))

  simulation  <- Simulator$new(agents, horizon, simulations)
  history     <- simulation$run()

  plot(history, type = "cumulative", regret = FALSE, rate = TRUE,
       legend_position = "right")
}