A function-based continuum multi-armed bandit where arms are chosen from a subset of the real line and the mean rewards are assumed to be a continuous function of the arms.
bandit <- ContinuumBandit$new(FUN)

argument:
  FUN: a continuous function of the arm value, used to generate the rewards.
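For instance, a minimal construction sketch (not part of the package documentation; it assumes a hypothetical quadratic mean-reward function with Gaussian noise, similar to the one used in the full example below):

library(contextual)

# Hypothetical continuous reward function: quadratic mean plus Gaussian noise.
continuous_arms <- function(x) -0.1 * (x - 5)^2 + 3.5 + rnorm(length(x), 0, 0.4)

bandit <- ContinuumBandit$new(FUN = continuous_arms)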
new(FUN)
generates and instantiates a new ContinuumBandit instance.
get_context(t)
argument:
  t: integer, time step t.
returns a list containing the current d x k dimensional matrix context$X, the number of arms context$k and the number of features context$d (see the sketch following these method descriptions).
get_reward(t, context, action)
arguments:
  t: integer, time step t.
  context: list containing the current context$X (d x k context matrix), context$k (number of arms) and context$d (number of context features), as set by bandit.
  action: list containing action$choice, as set by policy.
returns a list containing reward$reward and, where computable, reward$optimal (used by "oracle" policies and to calculate regret).
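As a rough sketch of how these methods fit together when querying the bandit directly (it assumes the bandit constructed above and a hypothetically chosen arm value; in normal use a Policy supplies action$choice through an Agent and Simulator, and the reward is assumed here to come from evaluating FUN at the chosen arm value):

context <- bandit$get_context(t = 1)                  # number of arms in context$k, features in context$d
action  <- list(choice = 4.2)                         # hypothetical arm value from the real line
reward  <- bandit$get_reward(t = 1, context, action)  # reward generated by FUN at the chosen arm value
reward$reward                                         # the (noisy) observed reward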
(used by "oracle" policies and to calculate regret).Core contextual classes: Bandit
, Policy
, Simulator
,
Agent
, History
, Plot
Bandit subclass examples: BasicBernoulliBandit
, ContextualLogitBandit
,
OfflineReplayEvaluatorBandit
Policy subclass examples: EpsilonGreedyPolicy
, ContextualLinTSPolicy
horizon     <- 1500
simulations <- 100

# Continuous mean-reward function of the arm value, with Gaussian noise.
continuous_arms <- function(x) {
  -0.1 * (x - 5)^2 + 3.5 + rnorm(length(x), 0, 0.4)
}

# Lock-in Feedback (LiF) policy parameters.
int_time   <- 100
amplitude  <- 0.2
learn_rate <- 0.3
omega      <- 2 * pi / int_time
x0_start   <- 2.0

policy <- LifPolicy$new(int_time, amplitude, learn_rate, omega, x0_start)
bandit <- ContinuumBandit$new(FUN = continuous_arms)
agent  <- Agent$new(policy, bandit)

history <- Simulator$new(agents      = agent,
                         horizon     = horizon,
                         simulations = simulations,
                         save_theta  = TRUE)$run()

plot(history, type = "average", regret = FALSE)
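With type = "average" and regret = FALSE, the final plot call shows the average reward per time step across the 100 simulations rather than cumulative regret.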