Bandit for the doubly robust evaluation of policies with offline data.
bandit <- OfflineDoublyRobustBandit$new(formula, data, k = NULL, d = NULL, unique = NULL, shared = NULL, randomize = TRUE)
formula
formula (required).
Format:
y.context ~ z.choice | x1.context + x2.context + ... | r1.reward + r2.reward + ... | p.propensity
Here, r1.reward to rk.reward represent regression-based, precalculated rewards per arm.
When p.propensity is left out, OfflineDoublyRobustBandit uses the marginal probability per arm
as the propensity. An intercept is added to the context model by default. Exclude the intercept
by adding "0" or "-1" to the list of contextual features, as in:
y.context ~ z.choice | x1.context + x2.context - 1
A short construction sketch follows the argument descriptions below.
data
data.table or data.frame; offline data source (required)
k
integer; number of arms (optional). Optionally used to reformat the formula-defined x.context
vector as a k x d matrix. When making use of such matrix-formatted contexts, you need to define
custom intercept(s) when and where needed in the data.table or data.frame.
d
integer; number of contextual features (optional). Optionally used to reformat the formula-defined
x.context vector as a k x d matrix. When making use of such matrix-formatted contexts, you need to
define custom intercept(s) when and where needed in the data.table or data.frame.
randomize
logical; randomize rows of data stream per simulation (optional, default: TRUE)
replacement
logical; sample with replacement (optional, default: FALSE)
jitter
logical; add jitter to contextual features (optional, default: FALSE)
unique
integer vector; index of disjoint features (optional)
shared
integer vector; index of shared features (optional)
threshold
float in (0,1); lower threshold (tau) on propensity score values. A smaller tau makes for less
biased estimates with more variance, and vice versa. For more information, see the paper by
Strehl et al. (2010). Values between 0.01 and 0.05 are known to work well.
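As a minimal construction sketch: the column names below (y, z, x1, x2, r1, r2, p) are placeholders
for columns assumed to exist in the supplied offline data, not part of the package API.

# Full specification: outcome ~ choice | contextual features |
#                     per-arm regression rewards | logging propensity
f_full    <- y ~ z | x1 + x2 | r1 + r2 | p

# Without a propensity part, the bandit falls back to the marginal
# probability per arm as the propensity score.
f_no_prop <- y ~ z | x1 + x2 | r1 + r2

# Add "-1" (or "0") to the contextual features to drop the default intercept.
f_no_int  <- y ~ z | x1 + x2 - 1 | r1 + r2 | p

# `data` is assumed to be a data.table or data.frame containing the columns above.
bandit <- OfflineDoublyRobustBandit$new(formula = f_full, data = data)

Conceptually, following Dudík et al. (2011), the per-arm value estimate combines the regression
rewards with an inverse-propensity-weighted correction, along the lines of
r_hat[k] + (k == a) * (y - r_hat[a]) / p for logged arm a; clipping propensities at the threshold
tau bounds the weight 1/p and so trades bias for variance. The package's internal computation may
differ in its details.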
new(formula, data, k = NULL, d = NULL, unique = NULL, shared = NULL, randomize = TRUE)
generates and instantiates a new OfflineDoublyRobustBandit instance.
get_context(t)
argument:
t: integer, time step t.
returns a list containing the current d x k dimensional matrix context$X, the number of arms
context$k and the number of features context$d.

get_reward(t, context, action)
arguments:
t: integer, time step t.
context: list, containing the current context$X (d x k context matrix), context$k (number of arms)
and context$d (number of context features) (as set by bandit).
action: list, containing action$choice (as set by policy).
returns a list containing reward$reward and, where computable, reward$optimal (used by "oracle"
policies and to calculate regret).

post_initialization()
Randomizes the offline data by shuffling the data.table before the start of each individual
simulation when self$randomize is TRUE (default).
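As a minimal sketch of how these methods are called on an instantiated bandit (in normal use a
Policy and the Simulator drive these calls; `bandit` is assumed to have been created as in the
example at the bottom of this page):

# Minimal sketch; assumes `bandit` was instantiated with an offline formula and data.
bandit$post_initialization()          # shuffles the offline data when randomize = TRUE
context <- bandit$get_context(1)      # context$X, context$k, context$d
action  <- list(choice = 1)           # normally set by a Policy
reward  <- bandit$get_reward(1, context, action)
reward$reward                         # the (doubly robust) reward for the chosen arm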
Dudík, Miroslav, John Langford, and Lihong Li. "Doubly robust policy evaluation and learning." arXiv preprint arXiv:1103.4601 (2011).
Agarwal, Alekh, et al. "Taming the monster: A fast and simple algorithm for contextual bandits." International Conference on Machine Learning. 2014.
Strehl, Alex, et al. "Learning from logged implicit exploration data." Advances in Neural Information Processing Systems. 2010.
Core contextual classes: Bandit, Policy, Simulator, Agent, History, Plot
Bandit subclass examples: BasicBernoulliBandit, ContextualLogitBandit, OfflineDoublyRobustBandit
Policy subclass examples: EpsilonGreedyPolicy, ContextualLinTSPolicy
if (FALSE) {
library(contextual)
library(data.table)

# Import myocardial infarction dataset
url  <- "http://d1ie9wlkzugsxr.cloudfront.net/data_propensity/myocardial_propensity.csv"
data <- fread(url)

simulations <- 300
horizon     <- nrow(data)

# Arms always start at 1
data$trt <- data$trt + 1

# Turn death into alive, making it a reward
data$alive <- abs(data$death - 1)

# Run a regression per arm, predict outcomes, and save the results, one column per arm
f <- alive ~ age + risk + severity

model_f <- function(arm) glm(f, data = data[trt == arm],
                             family = binomial(link = "logit"),
                             y = FALSE, model = FALSE)

arms       <- sort(unique(data$trt))
model_arms <- lapply(arms, FUN = model_f)

predict_arm <- function(model) predict(model, data, type = "response")
r_data      <- lapply(model_arms, FUN = predict_arm)
r_data      <- do.call(cbind, r_data)
colnames(r_data) <- paste0("r", (1:max(arms)))

# Bind data and model predictions
data <- cbind(data, r_data)

# Fit a propensity model for the logged treatment assignment
m <- glm(I(trt - 1) ~ age + risk + severity, data = data,
         family = binomial(link = "logit"))
data$p <- predict(m, type = "response")

f <- alive ~ trt | age + risk + severity | r1 + r2 | p

bandit <- OfflineDoublyRobustBandit$new(formula = f, data = data)

# Define agents.
agents <- list(Agent$new(LinUCBDisjointOptimizedPolicy$new(0.2), bandit, "LinUCB"),
               Agent$new(FixedPolicy$new(1), bandit, "Arm1"),
               Agent$new(FixedPolicy$new(2), bandit, "Arm2"))

# Initialize the simulation.
simulation <- Simulator$new(agents = agents, simulations = simulations, horizon = horizon)

# Run the simulation.
sim <- simulation$run()

# Plot the results.
plot(sim, type = "cumulative", regret = FALSE, rate = TRUE,
     legend_position = "bottomright")
plot(sim, type = "arms", limit_agents = "LinUCB")
}