Illustrates precaching of contexts and rewards.
TODO: Fix "attempt to select more than one element in integerOneIndex"
Contextual extension of BasicBernoulliBandit.
Details

Contextual extension of BasicBernoulliBandit where a user-specified d x k dimensional matrix takes the place of BasicBernoulliBandit's k dimensional probability vector. Here, each row d represents a feature with k reward probability values per arm.

For every time step t, ContextualPrecachingBandit randomly samples a subset of its d features/rows, yielding a binary d x k context matrix in which sampled rows are all ones and unsampled rows are all zeros. ContextualPrecachingBandit then generates rewards contingent on either the sum or the mean (default) of the probabilities of each arm/column over all sampled features/rows.
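To make the mechanism concrete, the sampling and reward scheme described above can be sketched in plain R. This is a standalone illustration with made-up values, not the package's internal code:

set.seed(42)

d <- 3; k <- 3

# d x k weight matrix: each row is a feature, each column an arm.
weights <- matrix(c(0.4, 0.2, 0.4,
                    0.3, 0.4, 0.3,
                    0.1, 0.8, 0.1), nrow = d, ncol = k, byrow = TRUE)

# Randomly sample a subset of the d features/rows (ensuring at least one row).
sampled <- sample(c(0L, 1L), d, replace = TRUE)
if (sum(sampled) == 0) sampled[sample(d, 1)] <- 1L

# Binary d x k context matrix: sampled rows are all ones, unsampled rows all zeros.
X <- matrix(rep(sampled, k), nrow = d, ncol = k)

# Per-arm reward probability: mean (the default) of the sampled rows per arm/column.
probs <- colMeans(weights[sampled == 1, , drop = FALSE])

# Bernoulli reward for an arbitrarily chosen arm, here arm 2.
reward <- rbinom(1, size = 1, prob = probs[2])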
Usage

bandit <- ContextualPrecachingBandit$new(weights)
Arguments

weights
  numeric matrix; d x k dimensional matrix where each row d represents a feature with k reward probability values per arm.
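For example (a made-up matrix for illustration), a bandit with two features and three arms would take a 2 x 3 weights matrix:

# d = 2 features (rows), k = 3 arms (columns); values are reward probabilities.
weights <- matrix(c(0.2, 0.5, 0.8,
                    0.6, 0.3, 0.1),
                  nrow = 2, ncol = 3, byrow = TRUE)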
Methods

new(weights)
  generates and initializes a new ContextualPrecachingBandit instance.
get_context(t)
  argument:
    t: integer, time step t.
  returns a list containing the current d x k dimensional context matrix context$X, the number of arms context$k, and the number of features context$d.
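The shape of the returned list looks roughly as follows (an illustrative sketch with made-up values, not output produced by the package): three features, of which two were sampled.

context <- list(
  X = matrix(c(1, 1, 1,
               0, 0, 0,
               1, 1, 1), nrow = 3, ncol = 3, byrow = TRUE),  # binary d x k context matrix
  d = 3,   # number of context features
  k = 3    # number of arms
)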
get_reward(t, context, action)
  arguments:
    t: integer, time step t.
    context: list containing the current context$X (d x k context matrix), context$k (number of arms) and context$d (number of context features), as set by bandit.
    action: list containing action$choice, as set by policy.
  returns a list containing reward$reward and, where computable, reward$optimal (used by "oracle" policies and to calculate regret).
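For reference, the shapes of the action argument and the returned reward list look roughly like the sketch below (made-up values; only the field names are taken from the documentation above):

action <- list(choice = 2L)       # arm selected by the policy
reward <- list(reward  = 1,       # Bernoulli reward for the chosen arm
               optimal = 1)       # oracle/optimal value, where computable (used for regret)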
generate_bandit_data()
  helper function called before Simulator starts iterating over all time steps t in T; pregenerates contexts and rewards.
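Outside of a Simulator run, the same precaching flow can be sketched by calling these methods directly. This is a hedged illustration, assuming the contextual package is attached and that generate_bandit_data() can be called without arguments as documented above; normally Simulator drives these calls itself:

library(contextual)

weights <- matrix(c(0.8, 0.2,
                    0.1, 0.9), nrow = 2, ncol = 2, byrow = TRUE)

bandit <- ContextualPrecachingBandit$new(weights)

# Pregenerate contexts and rewards, as Simulator would do before iterating over t in T.
bandit$generate_bandit_data()

context <- bandit$get_context(t = 1L)
action  <- list(choice = 1L)                          # a fixed choice, for illustration only
reward  <- bandit$get_reward(t = 1L, context, action)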
See also

Core contextual classes: Bandit, Policy, Simulator, Agent, History, Plot

Bandit subclass examples: BasicBernoulliBandit, ContextualLogitBandit, OfflineReplayEvaluatorBandit

Policy subclass examples: EpsilonGreedyPolicy, ContextualLinTSPolicy
Examples

horizon     <- 100L
simulations <- 100L

# rows represent features, columns represent arms:
context_weights <- matrix(c(0.4, 0.2, 0.4,
                            0.3, 0.4, 0.3,
                            0.1, 0.8, 0.1),
                          nrow = 3, ncol = 3, byrow = TRUE)

bandit <- ContextualPrecachingBandit$new(context_weights)

agents <- list(Agent$new(EpsilonGreedyPolicy$new(0.1), bandit),
               Agent$new(LinUCBDisjointOptimizedPolicy$new(0.6), bandit))

simulation <- Simulator$new(agents, horizon, simulations)
history    <- simulation$run()

plot(history, type = "cumulative")