Context-free Gaussian multi-armed bandit.

Details

Simulates k Gaussian arms, where each arm generates rewards from a normal distribution with the provided mean mu and standard deviation sigma.
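
To make the reward model concrete, pulling arm i at any time step amounts to a single draw from rnorm() with that arm's parameters. The following is an illustrative sketch of the model itself, not the package's internal implementation:

  # Model sketch: pulling arm i yields one draw from N(mu_i, sigma_i)
  mu_per_arm    <- c(0.0, 0.0, 1.0)
  sigma_per_arm <- c(1.0, 1.0, 1.0)
  i             <- 3
  reward        <- rnorm(1, mean = mu_per_arm[i], sd = sigma_per_arm[i])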

Usage

  bandit <- BasicGaussianBandit$new(mu_per_arm, sigma_per_arm)

Arguments

mu_per_arm

numeric vector; mean mu for each of the bandit's k arms

sigma_per_arm

numeric vector; standard deviation of additive Gaussian noise for each of the bandit's k arms. Set to zero for no noise.

Methods

new(mu_per_arm, sigma_per_arm)

generates and instantiates a new BasicGaussianBandit instance.
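
For example, to instantiate the three-armed bandit used in the Examples section below, with the third arm offering the highest expected reward:

  bandit <- BasicGaussianBandit$new(mu_per_arm    = c(0.0, 0.0, 1.0),
                                    sigma_per_arm = c(1.0, 1.0, 1.0))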

get_context(t)

argument:

  • t: integer, time step t.

returns a named list containing the current d x k context matrix context$X, the number of arms context$k, and the number of features context$d.
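
For instance, with the bandit instantiated above (a sketch; since this bandit is context-free, d is expected to be 1):

  context <- bandit$get_context(1)
  context$X   # d x k context matrix
  context$k   # number of arms, here 3
  context$d   # number of context features, expected to be 1 here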

get_reward(t, context, action)

arguments:

  • t: integer, time step t.

  • context: list containing the current context$X (d x k context matrix), context$k (number of arms), and context$d (number of context features), as set by the bandit.

  • action: list containing action$choice, as set by the policy.

returns a named list containing reward$reward and, where computable, reward$optimal (used by "oracle" policies and to calculate regret).
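
Putting get_context() and get_reward() together, one manual simulation step might look like the following sketch, where a hard-coded choice of arm 3 stands in for a policy's decision:

  t       <- 1
  context <- bandit$get_context(t)
  action  <- list(choice = 3)                      # normally set by a policy
  reward  <- bandit$get_reward(t, context, action)
  reward$reward    # sampled Gaussian reward for the chosen arm
  reward$optimal   # information on the optimal arm, used to compute regret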

Examples

  if (FALSE) {
    horizon <- 100
    sims    <- 100
    policy  <- EpsilonGreedyPolicy$new(epsilon = 0.1)
    bandit  <- BasicGaussianBandit$new(c(0, 0, 1), c(1, 1, 1))
    agent   <- Agent$new(policy, bandit)
    history <- Simulator$new(agent, horizon, sims)$run()
    plot(history, type = "cumulative", regret = TRUE)
  }
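
In this example the third arm, with mean 1, is the optimal arm, so the plotted cumulative regret shows how quickly EpsilonGreedyPolicy's choices converge on it over the 100 simulations.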