TODO: Optimization.

Details

An extension of ContextualLogitBandit that models hybrid rewards with a combination of unique (or "disjoint") and shared contextual features.

Usage

  bandit <- ContextualHybridBandit$new(k, shared_features, unique_features, sigma = 1.0)

Arguments

k

integer; number of bandit arms

shared_features

integer; number of shared features

unique_features

integer; number of unique/disjoint features

sigma

numeric; standard deviation of the additive Gaussian noise

Methods

new(k, shared_features, unique_features, sigma = 1.0)

generates and instantiates a new ContextualHybridBandit instance.
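
For example (argument values are illustrative), a hybrid bandit with 20 arms, 5 shared and 3 unique features can be instantiated as:

  bandit <- ContextualHybridBandit$new(k = 20, shared_features = 5,
                                       unique_features = 3, sigma = 1.0)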

get_context(t)

argument:

  • t: integer, time step t.

returns a named list containing the current d x k dimensional context matrix context$X, the number of arms context$k, and the number of context features context$d.
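
A minimal sketch of sampling a context outside of a Simulator run (argument values are illustrative; post_initialization() is called manually here, as a Simulator would normally take care of this before the first time step):

  bandit <- ContextualHybridBandit$new(k = 5, shared_features = 3,
                                       unique_features = 2)
  bandit$post_initialization()          # generate the bandit's beta matrix
  context <- bandit$get_context(t = 1L)
  context$k                             # number of arms
  context$d                             # number of context features
  dim(context$X)                        # d x k context matrix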

get_reward(t, context, action)

arguments:

  • t: integer, time step t.

  • context: list, containing the current context$X (d x k context matrix), context$k (number of arms), and context$d (number of context features), as set by the bandit.

  • action: list, containing action$choice, as set by the policy.

returns a named list containing reward$reward and, where computable, reward$optimal (used by "oracle" policies and to calculate regret).
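
Continuing the sketch above, a reward for a chosen arm can be retrieved directly; the hard-coded arm 1 stands in for a choice normally made by a policy:

  action <- list(choice = 1L)                    # normally set by a policy
  reward <- bandit$get_reward(t = 1L, context, action)
  reward$reward                                  # observed (noisy) reward
  reward$optimal                                 # optimal reward, where computable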

post_initialization()

initializes the bandit's d x k matrix of beta coefficients.

See also

Examples

if (FALSE) {
  horizon     <- 800L
  simulations <- 100L

  bandit      <- ContextualHybridBandit$new(k = 100, shared_features = 10,
                                            unique_features = 2)

  agents      <- list(Agent$new(ContextualLinTSPolicy$new(0.1), bandit),
                      Agent$new(EpsilonGreedyPolicy$new(0.1), bandit),
                      Agent$new(LinUCBGeneralPolicy$new(0.6), bandit),
                      Agent$new(ContextualEpochGreedyPolicy$new(8), bandit),
                      Agent$new(LinUCBHybridOptimizedPolicy$new(0.6), bandit),
                      Agent$new(LinUCBDisjointOptimizedPolicy$new(0.6), bandit))

  simulation  <- Simulator$new(agents, horizon, simulations)
  history     <- simulation$run()

  plot(history, type = "cumulative", regret = FALSE, rate = TRUE,
       legend_position = "bottomright")
}