Bandit for the offline evaluation of policies through replay with propensity weighting.
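
Replay with propensity weighting means that a logged interaction only contributes to the evaluation when the evaluated policy chooses the same arm as the logging policy, and that its reward is reweighted by the inverse of the logged propensity score. Below is a minimal illustrative sketch of this estimator in plain R; it is not the package's internal code, and the names y, z, p and choice are placeholders for logged rewards, logged arms, logged propensities and the evaluated policy's choices:

  # Inverse propensity scored (IPS) replay estimate of a policy's value.
  # y: logged rewards, z: logged arms, p: logged propensities of z,
  # choice: arms the evaluated policy would pick for the same contexts.
  ips_value <- function(y, z, p, choice) {
    matched <- as.numeric(choice == z)   # only rows where the choices match contribute
    mean(matched * y / p)                # reweight matching rewards by 1 / propensity
  }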

Usage

  bandit <- OfflinePropensityWeightingBandit(formula,
                                             data, k = NULL, d = NULL,
                                             unique = NULL, shared = NULL,
                                             randomize = TRUE, replacement = TRUE,
                                             jitter = TRUE, arm_multiply = TRUE)

Arguments

formula

formula (required). Format: y.context ~ z.choice | x1.context + x2.context + ... | p.propensity. When p.propensity is left out, OfflinePropensityWeightingBandit uses the marginal probability per arm as the propensity score. By default, an intercept is added to the context model. To exclude the intercept, add "0" or "-1" to the list of contextual features, as in: y.context ~ z.choice | x1.context + x2.context - 1 | p.propensity. See the formula sketch following this argument list.

data

data.table or data.frame; offline data source (required)

k

integer; number of arms (optional). Optionally used to reformat the formula-defined x.context vector as a k x d matrix. When using such matrix-formatted contexts, define custom intercept(s) where needed directly in the data.table or data.frame.

d

integer; number of contextual features (optional). Optionally used to reformat the formula-defined x.context vector as a k x d matrix. When using such matrix-formatted contexts, define custom intercept(s) where needed directly in the data.table or data.frame.

randomize

logical; randomize rows of data stream per simulation (optional, default: TRUE)

replacement

logical; sample with replacement (optional, default: TRUE)

jitter

logical; add jitter to contextual features (optional, default: TRUE)

arm_multiply

logical; multiply the horizon by the number of arms (optional, default: TRUE)

threshold

float (0,1); lower threshold or tau on propensity score values. A smaller tau makes for less biased estimates with more variance, and vice versa. For more information, see Strehl et al. (2010). Values between 0.01 and 0.05 are known to work well.

drop_value

logical; whether to drop a sample when the chosen arm does not equal the logged arm. When TRUE, the sample is dropped by setting the reward to NULL. When FALSE, the reward is set to zero.

stabilized

logical; whether to stabilize the propensity weights. A common issue with inverse propensity weighting is that samples with a propensity score very close to 0 end up with an extremely large weight, potentially making the weighted estimator highly unstable. A common alternative to the conventional weights is stabilized weights, which use the marginal probability of treatment instead of 1 in the weight numerator. See the sketch of truncated and stabilized weights following this argument list.

unique

integer vector; index of disjoint features (optional)

shared

integer vector; index of shared features (optional)
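
To illustrate the three-part formula format described under the formula argument above, a few example formulas using the column names from the Examples section below (the three parts separate reward, chosen arm, contextual features and, optionally, logged propensities):

  # reward ~ chosen arm | contextual features | logged propensity score
  f_with_p    <- alive ~ trt | age + risk + severity | p

  # without a propensity term, the marginal probability per arm is used
  f_without_p <- alive ~ trt | age + risk + severity

  # "- 1" (or "+ 0") removes the intercept from the context model
  f_no_icept  <- alive ~ trt | age + risk + severity - 1 | p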
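
The threshold and stabilized arguments both address the instability that arises when logged propensity scores get close to zero. The following standalone sketch illustrates what truncation and stabilization look like on synthetic logged data; it is illustrative only and not the package's internal code:

  set.seed(1)
  z   <- sample(1:2, 100, replace = TRUE)   # logged arm choices
  p   <- runif(100, 0.01, 0.99)             # logged propensity scores
  tau <- 0.05                               # lower threshold on the propensities

  p_trunc  <- pmax(p, tau)                  # truncation: more bias, less variance as tau grows
  w_ips    <- 1 / p_trunc                   # conventional inverse propensity weights
  marginal <- as.numeric(prop.table(table(z))[as.character(z)])
  w_stab   <- marginal / p_trunc            # stabilized: marginal probability in the numerator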

Methods

new(formula, data, k = NULL, d = NULL, unique = NULL, shared = NULL, randomize = TRUE, replacement = TRUE, jitter = TRUE, arm_multiply = TRUE)

generates and instantiates a new OfflinePropensityWeightingBandit instance.

get_context(t)

argument:

  • t: integer, time step t.

returns a named list containing the current d x k dimensional matrix context$X, the number of arms context$k and the number of features context$d.

get_reward(t, context, action)

arguments:

  • t: integer, time step t.

  • context: list, containing the current context$X (d x k context matrix), context$k (number of arms) and context$d (number of context features) (as set by bandit).

  • action: list, containing action$choice (as set by policy).

returns a named list containing reward$reward and, where computable, reward$optimal (used by "oracle" policies and to calculate regret).

post_initialization()

Randomizes the offline data by shuffling the offline data.table before the start of each individual simulation when self$randomize is TRUE (the default).
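
A schematic sketch of how these methods interact, stepping the bandit by hand on a small synthetic log. The Simulator and Agent classes normally drive this loop; the data and column names below are made up for illustration only:

  library(contextual)
  library(data.table)

  set.seed(42)
  dt <- data.table(y = rbinom(50, 1, 0.5),      # logged rewards
                   z = sample(1:2, 50, TRUE),   # logged arm choices
                   x = rnorm(50),               # a contextual feature
                   p = runif(50, 0.2, 0.8))     # logged propensity scores

  bandit <- OfflinePropensityWeightingBandit$new(formula = y ~ z | x | p, data = dt)

  bandit$post_initialization()              # shuffles the offline data (randomize = TRUE)
  context <- bandit$get_context(1)          # named list with X (d x k matrix), k and d
  action  <- list(choice = 1)               # as a Policy would set action$choice
  reward  <- bandit$get_reward(1, context, action)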

References

Agarwal, Alekh, et al. "Taming the monster: A fast and simple algorithm for contextual bandits." International Conference on Machine Learning. 2014.

Strehl, Alex, et al. "Learning from logged implicit exploration data." Advances in Neural Information Processing Systems. 2010.

See also

Core contextual classes: Bandit, Policy, Simulator, Agent, History, Plot

Bandit subclass examples: BasicBernoulliBandit, ContextualLogitBandit, OfflinePropensityWeightingBandit

Policy subclass examples: EpsilonGreedyPolicy, ContextualLinTSPolicy

Examples

if (FALSE) {

library(contextual)
library(data.table)

# Import myocardial infarction dataset
url  <- "http://d1ie9wlkzugsxr.cloudfront.net/data_propensity/myocardial_propensity.csv"
data <- fread(url)

simulations <- 3000
horizon     <- nrow(data)

# Arms always start at 1
data$trt <- data$trt + 1

# Turn death into alive, making it a reward
data$alive <- abs(data$death - 1)

# Calculate propensity weights
m <- glm(I(trt - 1) ~ age + risk + severity,
         data = data, family = binomial(link = "logit"))
data$p <- predict(m, type = "response")

# Run bandit - if you leave out p, the Propensity Bandit uses the marginal
# probability per arm for propensities: table(private$z)/length(private$z)
f <- alive ~ trt | age + risk + severity | p

bandit <- OfflinePropensityWeightingBandit$new(formula = f, data = data)

# Define agents.
agents <- list(Agent$new(LinUCBDisjointOptimizedPolicy$new(0.2), bandit, "LinUCB"))

# Initialize the simulation.
simulation <- Simulator$new(agents = agents,
                            simulations = simulations,
                            horizon = horizon)

# Run the simulation.
sim <- simulation$run()

# Plot the results
plot(sim, type = "cumulative", regret = FALSE, rate = TRUE,
     legend_position = "bottomright")
}