ContextualBernoulliBandit
Source: R/bandit_cmab_bernoulli.R
Description

Contextual Bernoulli multi-armed bandit where at least one context feature is active at a time.
Usage

bandit <- ContextualBernoulliBandit$new(weights)
Arguments

weights
numeric matrix; a d x k matrix holding the probability of reward for each of the d contextual features under each of the k arms.
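For instance (an illustrative sketch, not taken from the package documentation), a bandit with d = 2 context features and k = 3 arms could be set up as follows:

# Illustrative only: rows index the d = 2 context features,
# columns index the k = 3 arms. Entry [i, j] is the probability
# of a Bernoulli reward for arm j when feature i is active.
weights <- matrix(c(0.8, 0.1, 0.1,
                    0.1, 0.1, 0.8),
                  nrow = 2, ncol = 3, byrow = TRUE)
bandit  <- ContextualBernoulliBandit$new(weights = weights)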
Methods

new(weights)
Generates and initializes a new ContextualBernoulliBandit instance.
get_context(t)
argument:
t: integer, time step t.
Returns a list containing the current d x k dimensional context matrix context$X, the number of arms context$k, and the number of context features context$d.
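A minimal sketch of retrieving and inspecting a context, assuming a bandit instantiated as under Usage:

context <- bandit$get_context(t = 1)
dim(context$X)   # the d x k context matrix
context$k        # number of arms
context$d        # number of context features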
get_reward(t, context, action)
arguments:
t: integer, time step t.
context: list, containing the current context$X (d x k context matrix), context$k (number of arms), and context$d (number of context features), as set by the bandit.
action: list, containing action$choice, as set by the policy.
Returns a list containing reward$reward and, where computable, reward$optimal (used by "oracle" policies and to calculate regret).
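A minimal sketch of a single interaction step, with the action list assembled by hand (in a simulation it would be set by a Policy); the chosen arm index 1 is arbitrary:

context <- bandit$get_context(t = 1)
action  <- list(choice = 1)                     # normally set by a policy
reward  <- bandit$get_reward(t = 1, context, action)
reward$reward    # 0 or 1: the Bernoulli reward for the chosen arm
reward$optimal   # where computable; used by oracle policies and for regret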
See Also

Core contextual classes: Bandit, Policy, Simulator, Agent, History, Plot
Bandit subclass examples: ContextualBernoulliBandit, ContextualLogitBandit, OfflineReplayEvaluatorBandit

Policy subclass examples: EpsilonGreedyPolicy, ContextualLinTSPolicy
Examples

library(contextual)

horizon <- 100
sims    <- 100

policy  <- LinUCBDisjointOptimizedPolicy$new(alpha = 0.9)

# 3 context features (rows) x 3 arms (columns) of reward probabilities.
weights <- matrix(c(0.4, 0.2, 0.4,
                    0.3, 0.4, 0.3,
                    0.1, 0.8, 0.1),
                  nrow = 3, ncol = 3, byrow = TRUE)

bandit  <- ContextualBernoulliBandit$new(weights = weights)
agent   <- Agent$new(policy, bandit)

history <- Simulator$new(agent, horizon, sims)$run()

plot(history, type = "cumulative", regret = TRUE)