Welcome to the StratifiedMedicine R package. The overall goal of this package is to develop analytic and visualization tools to aid in stratified and personalized medicine. Stratified medicine aims to find subsets or subgroups of patients with similar treatment effects, for example responders vs non-responders, while personalized medicine aims to understand treatment effects at the individual level (does a specific individual respond to treatment A?). Development of this package is ongoing.

Currently, the main algorithm in this package is “PRISM” (Patient Response Identifiers for Stratified Medicine; Jemielita and Mehrotra 2019, https://arxiv.org/abs/1912.03337). Given a data-structure of \((Y, A, X)\) (outcome(s), treatments, covariates), PRISM is a five step procedure:

1. **Estimand**: Determine the question(s) or estimand(s) of interest. For example, \(\theta_0 = E(Y|A=1)-E(Y|A=0)\), where A is a binary treatment variable. While this isn't an explicit step in the PRISM function, the question of interest guides how to set up PRISM.

2. **Filter (filter)**: Reduce the covariate space by removing variables unrelated to the outcome or treatment. Formally: \[ filter(Y, A, X) \longrightarrow (Y, A, X^{\star}) \] where \(X^{\star}\) has potentially lower dimension than \(X\).

3. **Patient-level estimate (ple)**: Estimate counterfactual patient-level quantities, for example the individual treatment effect, \(\theta(x) = E(Y|X=x,A=1)-E(Y|X=x,A=0)\). Formally: \[ ple(Y, A, X^{\star}) \longrightarrow \hat{\mathbf{\Theta}}(X^{\star}) \] where \(\hat{\mathbf{\Theta}}(X^{\star})\) is the matrix of patient-level estimates. For example, these could be counterfactual estimates of \([E(Y|A=1,X=x), E(Y|A=0, X=x), E(Y|A=1,X=x)-E(Y|A=0,X=x)]\), or \([RMST_{\tau}(Y|A=1,X=x)-RMST_{\tau}(Y|A=0,X=x)]\), where RMST refers to the restricted mean survival time with truncation time \(\tau\).

4. **Subgroup model (submod)**: Partition the data into subsets of patients (likely with similar treatment effects). Formally: \[ submod(Y, A, X^{\star}, \hat{\mathbf{\Theta}}(X^{\star})) \longrightarrow \mathbf{S}(X^{\star}) \] where \(\mathbf{S}(X^{\star})\) is a distinct set of rules that define the \(k=0,...,K\) discovered subgroups, for example \(\mathbf{S}(X^{\star}) = \{X_1=0, X_2=0\}\). Note that subgroups could be formed using the observed outcomes, the PLEs, or both. By default, \(k=0\) corresponds to the overall population.

5. **Parameter estimation and inference (param)**: For the overall population and discovered subgroups, output point estimates and variability metrics. Formally: \[ param(Y, A, X^{\star}, \hat{\mathbf{\Theta}}(X^{\star}), \mathbf{S}(X^{\star}) ) \longrightarrow \{ \hat{\theta}_{k}, SE(\hat{\theta}_k), CI_{\alpha}(\hat{\theta}_{k}), P(\hat{\theta}_{k} > c) \} \text{ for } k=0,...,K \] where \(\hat{\theta}_{k}\) is the point-estimate, \(SE(\hat{\theta}_k)\) is the standard error, \(CI_{\alpha}(\hat{\theta}_{k})\) is a two (or one) sided confidence interval with nominal coverage \(1-\alpha\), and \(P(\hat{\theta}_{k} > c)\) is a probability statement for some constant \(c\) (ex: \(c=0\)). These outputs are crucial for decision making and can also correspond to multiple estimates for each subgroup and overall. For binary/continuous outcomes, the default is to output point-estimates, SEs, CIs, and p-values for the corresponding estimands \([E(Y|A=1,X=x), E(Y|A=0,X=x), E(Y|A=1,X=x)-E(Y|A=0,X=x)]\) in each discovered subgroup and overall.
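As a worked instance of the estimand in step 1, the overall treatment effect \(\theta_0\) and a simple Z-interval can be computed directly in base R. This is a toy illustration on simulated data, not package code:

```r
# Toy illustration of the estimand theta_0 = E(Y|A=1) - E(Y|A=0)
set.seed(1)
n <- 200
A <- rbinom(n, size = 1, prob = 0.5)   # 1:1 randomized binary treatment
Y <- 0.5 * A + rnorm(n)                # true treatment effect = 0.5

theta_hat <- mean(Y[A == 1]) - mean(Y[A == 0])          # point estimate
se_hat <- sqrt(var(Y[A == 1]) / sum(A == 1) +
               var(Y[A == 0]) / sum(A == 0))            # standard error
ci <- theta_hat + c(-1, 1) * 1.96 * se_hat              # two-sided 95% Z-interval
```

PRISM reports exactly these kinds of quantities (estimate, SE, CI) per subgroup and overall in its param step.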

Ultimately, PRISM provides information at the patient level, the subgroup level (if any), and the overall population. While there are defaults in place, the user can also input their own functions/model wrappers into the PRISM algorithm; we demonstrate this later. PRISM can also be run without treatment assignment (A=NULL); in this setting, the focus is on finding subgroups based on prognostic effects. The tables below describe the default PRISM configurations for different family (gaussian, binomial, survival) and treatment (no treatment vs treatment) settings, including the associated estimands. Note that OLS refers to ordinary least squares (linear regression), GLM refers to generalized linear model, and MOB refers to model-based partitioning (Zeileis, Hothorn, Hornik 2008; Seibold, Zeileis, Hothorn 2016). To summarize, default models include elastic net (Zou and Hastie 2005) for filtering, random forest (“ranger” R package) for patient-level/counterfactual estimation, and MOB (through the “partykit” R package; lmtree, glmtree, and ctree (Hothorn, Hornik, Zeileis 2005)). When treatment assignment is provided, parameter estimation for continuous and binary outcomes involves averaging the patient-level estimates within the overall population and discovered subgroups (more details later). For survival outcomes, the Cox regression hazard ratio (HR) or RMST (via the “survRM2” R package) is used.


**Treatment provided (A):**

Step | gaussian | binomial | survival
---|---|---|---
estimand(s) | E(Y\|A=0), E(Y\|A=1), E(Y\|A=1)-E(Y\|A=0) | E(Y\|A=0), E(Y\|A=1), E(Y\|A=1)-E(Y\|A=0) | HR(A=1 vs A=0)
filter | Elastic Net (filter_glmnet) | Elastic Net (filter_glmnet) | Elastic Net (filter_glmnet)
ple | Random Forest (ple_ranger) | Random Forest (ple_ranger) | Random Forest (ple_ranger)
submod | MOB(OLS) (submod_lmtree) | MOB(GLM) (submod_glmtree) | MOB(Weibull) (submod_weibull)
param | Average of PLEs (param_ple) | Average of PLEs (param_ple) | Hazard Ratios (param_cox)

**No treatment provided (A=NULL):**

Step | gaussian | binomial | survival
---|---|---|---
estimand(s) | E(Y) | Prob(Y) | RMST
filter | Elastic Net (filter_glmnet) | Elastic Net (filter_glmnet) | Elastic Net (filter_glmnet)
ple | Random Forest (ple_ranger) | Random Forest (ple_ranger) | Random Forest (ple_ranger)
submod | Conditional Inference Trees (submod_ctree) | Conditional Inference Trees (submod_ctree) | Conditional Inference Trees (submod_ctree)
param | OLS (param_lm) | OLS (param_lm) | RMST (param_rmst)

# Example: Continuous Outcome with Binary Treatment

Consider a continuous outcome (ex: % change in tumor size) with a binary treatment (study drug vs standard of care). The estimand of interest is the average treatment effect, \(\theta_0 = E(Y|A=1)-E(Y|A=0)\). First, we simulate continuous data where roughly 30% of the patients receive no treatment benefit from using \(A=1\) vs \(A=0\). Responders vs non-responders are defined by the continuous predictive covariates \(X_1\) and \(X_2\) for a total of four subgroups. Subgroup treatment effects are: \(\theta_{1} = 0\) (\(X_1 \leq 0, X_2 \leq 0\)), \(\theta_{2} = 0.25\) (\(X_1 > 0, X_2 \leq 0\)), \(\theta_{3} = 0.45\) (\(X_1 \leq 0, X_2 > 0\)), and \(\theta_{4} = 0.65\) (\(X_1 > 0, X_2 > 0\)).

```
library(ggplot2)
library(dplyr)
library(partykit)
library(StratifiedMedicine)
library(survival)
dat_ctns = generate_subgrp_data(family="gaussian")
Y = dat_ctns$Y
X = dat_ctns$X # 50 covariates, 46 are noise variables, X1 and X2 are truly predictive
A = dat_ctns$A # binary treatment, 1:1 randomized
length(Y)
#> [1] 800
table(A)
#> A
#> 0 1
#> 409 391
dim(X)
#> [1] 800 50
```

For continuous outcome data (family=“gaussian”), the default PRISM configuration is: (1) filter_glmnet (elastic net), (2) ple_ranger (treatment-specific random forest models), (3) submod_lmtree (model-based partitioning with OLS loss), and (4) param_ple (parameter estimation/inference through the PLEs). Jemielita and Mehrotra (2019) show that this configuration performs quite well in terms of bias, efficiency, coverage, and selecting the right predictive covariates. To run PRISM, at a minimum, the outcome (Y), treatment (A), and covariates (X) must be provided. See below.

```
# PRISM Default: filter_glmnet, ple_ranger, submod_lmtree, param_ple #
res0 = PRISM(Y=Y, A=A, X=X)
#> Observed Data
#> Filtering: filter_glmnet
#> PLE: ple_ranger
#> Subgroup Identification: submod_lmtree
#> Parameter Estimation: param_ple
summary(res0)
#> $`PRISM Configuration`
#> [1] "filter_glmnet => ple_ranger => submod_lmtree => param_ple"
#>
#> $`Variables that Pass Filter`
#> [1] "X1" "X2" "X3" "X5" "X7" "X8" "X10" "X12" "X16" "X18" "X24"
#> [12] "X26" "X31" "X40" "X46" "X50"
#>
#> $`Number of Identified Subgroups`
#> [1] 5
#>
#> $`Variables that Define the Subgroups`
#> [1] "X1, X2, X26"
#>
#> $`Parameter Estimates`
#> Subgrps N estimand est SE alpha CI
#> 1 0 800 E(Y|A=0) 1.6398 0.0457 0.05 [1.5501,1.7295]
#> 4 3 149 E(Y|A=0) 1.2830 0.1117 0.05 [1.0623,1.5037]
#> 7 4 277 E(Y|A=0) 1.6128 0.0671 0.05 [1.4808,1.7448]
#> 10 7 99 E(Y|A=0) 1.5897 0.1407 0.05 [1.3105,1.8689]
#> 13 8 168 E(Y|A=0) 1.7669 0.0893 0.05 [1.5906,1.9431]
#> 16 9 107 E(Y|A=0) 2.0534 0.1353 0.05 [1.7851,2.3217]
#> 2 0 800 E(Y|A=1) 1.8395 0.0487 0.05 [1.7439,1.9352]
#> 5 3 149 E(Y|A=1) 1.3181 0.1093 0.05 [1.102,1.5341]
#> 8 4 277 E(Y|A=1) 1.6734 0.0771 0.05 [1.5216,1.8251]
#> 11 7 99 E(Y|A=1) 1.9215 0.1316 0.05 [1.6603,2.1827]
#> 14 8 168 E(Y|A=1) 2.0414 0.0977 0.05 [1.8484,2.2344]
#> 17 9 107 E(Y|A=1) 2.6031 0.1162 0.05 [2.3728,2.8335]
#> 3 0 800 E(Y|A=1)-E(Y|A=0) 0.1997 0.0633 0.05 [0.0755,0.324]
#> 6 3 149 E(Y|A=1)-E(Y|A=0) 0.0351 0.1534 0.05 [-0.2681,0.3383]
#> 9 4 277 E(Y|A=1)-E(Y|A=0) 0.0606 0.1010 0.05 [-0.1382,0.2593]
#> 12 7 99 E(Y|A=1)-E(Y|A=0) 0.3318 0.1901 0.05 [-0.0454,0.709]
#> 15 8 168 E(Y|A=1)-E(Y|A=0) 0.2745 0.1310 0.05 [0.0158,0.5332]
#> 18 9 107 E(Y|A=1)-E(Y|A=0) 0.5498 0.1775 0.05 [0.1979,0.9016]
#>
#> attr(,"class")
#> [1] "summary.PRISM"
plot(res0) # same as plot(res0, type="tree")
```

```
## This is the same as running ##
# res1 = PRISM(Y=Y, A=A, X=X, family="gaussian", filter="filter_glmnet",
# ple = "ple_ranger", submod = "submod_lmtree", param="param_ple")
```

The summary gives a high-level overview of the findings (number of subgroups, parameter estimates, variables that survived the filter). The default plot() function currently combines tree plots with parameter estimates using the “ggparty” package. We can also look directly for prognostic effects by omitting A (treatment) from PRISM:

```
# PRISM Default: filter_glmnet, ple_ranger, submod_ctree, param_lm #
res_prog = PRISM(Y=Y, X=X)
#> No Treatment Variable (A) Provided: Searching for Prognostic Effects
#> Observed Data
#> Filtering: filter_glmnet
#> PLE: ple_ranger
#> Subgroup Identification: submod_ctree
#> Parameter Estimation: param_lm
# res_prog = PRISM(Y=Y, A=NULL, X=X) #also works
summary(res_prog)
#> $`PRISM Configuration`
#> [1] "filter_glmnet => ple_ranger => submod_ctree => param_lm"
#>
#> $`Variables that Pass Filter`
#> [1] "X1" "X2" "X3" "X5" "X7" "X8" "X10" "X12" "X16" "X18" "X24"
#> [12] "X26" "X31" "X40" "X46" "X50"
#>
#> $`Number of Identified Subgroups`
#> [1] 6
#>
#> $`Variables that Define the Subgroups`
#> [1] "X2, X1, X26"
#>
#> $`Parameter Estimates`
#> Subgrps N estimand est SE alpha CI
#> 1 0 800 E(Y) 1.6966 0.0372 0.05 [1.6235,1.7697]
#> 2 4 132 E(Y) 1.1119 0.0970 0.05 [0.92,1.3038]
#> 3 5 266 E(Y) 1.5107 0.0636 0.05 [1.3855,1.636]
#> 4 7 113 E(Y) 1.7016 0.0995 0.05 [1.5045,1.8987]
#> 5 8 122 E(Y) 2.1780 0.0856 0.05 [2.0085,2.3474]
#> 6 10 87 E(Y) 1.9091 0.1006 0.05 [1.7091,2.1091]
#> 7 11 80 E(Y) 2.6842 0.1133 0.05 [2.4586,2.9097]
#>
#> attr(,"class")
#> [1] "summary.PRISM"
plot(res_prog)
```

Next, circling back to the first PRISM model with treatment included, let’s review other core PRISM outputs and plotting functionality. Results relating to the filter include “filter.mod” (model output) and “filter.vars” (variables that pass the filter).

```
# elastic net model: loss by lambda #
plot(res0$filter.mod)
```

```
## Variables that remain after filtering ##
res0$filter.vars
#> [1] "X1" "X2" "X3" "X5" "X7" "X8" "X10" "X12" "X16" "X18" "X24"
#> [12] "X26" "X31" "X40" "X46" "X50"
# All predictive variables (X1, X2) and some prognostic variables (X3, X5, X7) remain.
```

Results relating to the PLE model include “ple.mod” (model output), “mu_train” (training predictions), and “mu_test” (test predictions) where, for continuous or binary data, predictions are of E(Y|X,A=a) and E(Y|X,A=1)-E(Y|X,A=0). The PLEs, or individual treatment effects, are informative of the overall treatment heterogeneity and can be visualized through built-in waterfall plots. In this case, roughly 73% of patients are estimated to benefit from treatment A=1 vs A=0 (PLE>0). PRISM plots are built using “ggplot2”, making it easy to enhance plot visualizations. For example,

```
prob.PLE = mean(I(res0$mu_train$PLE>0))
# Waterfall Plot #
plot(res0, type="PLE:waterfall")+geom_vline(xintercept = 0) +
geom_text(x=200, y=1, label=paste("Prob(PLE>0)=", prob.PLE, sep=""))
```

Next, the subgroup model (lmtree) identifies five subgroups based on varying treatment effects. By plotting the subgroup model object (“submod.fit$mod”), we see that partitions are made through X1 (predictive) and X2 (predictive). At each node, parameter estimates are reported for node (subgroup) specific OLS models, \(Y\sim \beta_0+\beta_1*A\). For example, patients in subgroups 3 and 9 have estimated treatment effects of 0.04 and 0.55, respectively. Subgroup predictions for the train/test sets can be found in the “out.train” and “out.test” data sets.

`plot(res0$submod.fit$mod, terminal_panel = NULL)`

```
table(res0$out.train$Subgrps)
#>
#> 3 4 7 8 9
#> 149 277 99 168 107
table(res0$out.test$Subgrps)
#>
#> 3 4 7 8 9
#> 149 277 99 168 107
```

These estimates tend to be overly positive or negative, as the same data that trains the subgroup model is used to estimate the treatment effects. Resampling, such as bootstrapping, can generally be used to obtain “de-biased” treatment effect estimates and valid inference (more details later).

For continuous and binary data, an alternative approach without resampling is to directly use the PLEs for parameter estimation and inference (param=“param_ple”). Let \(E(Y|X=x,A=a) = \mu(x, a)\) correspond to the outcome regression model(s) with estimates \(\hat{\mu}(x, a)\). These estimates come directly from the fitted PLE model(s), in this case, treatment-specific random forest models. For the overall population and each discovered subgroup (\(k=0,...,K\)), the treatment effect (or risk difference) can be estimated by averaging the patient-specific treatment effect estimates (PLEs): \[ \hat{\theta}_k = \frac{1}{n_k} \sum_{i \in S_k} {\hat{\theta}}(x_i) \] where \(\hat{\theta}(x_i)=\hat{\mu}(x_i, a=1)-\hat{\mu}(x_i, a=0)\). For SEs/CIs, we utilize “pseudo-outcomes”: \[ Y^{\star}_i = \frac{AY - (A-\hat{\pi}(x))\hat{\mu}(a=1,x)}{\hat{\pi}(x)} - \frac{(1-A)Y + (A-\hat{\pi}(x))\hat{\mu}(a=0,x)}{1-\hat{\pi}(x)}\] where \(\pi(x)=P(A=1|X=x)\) is the treatment assignment probability for an individual. In a randomized controlled trial, this can be replaced by the marginal probability, \(P(A=1)\). Note that \(E(Y^{\star}_i)=E(Y|A=1,X)-E(Y|A=0,X)\) and \(E(n_k^{-1}\sum_{i \in S_k} Y^{\star}_i)= E(Y|A=1, X \in S_k)-E(Y|A=0, X \in S_k)\). Next: \[SE(\hat{\theta}_k) = \sqrt{ n_k ^ {-2} \sum_{i \in S_k} \left( Y^{\star}_i-\hat{\theta}(x_i) \right)^2 } \] CIs can then be formed using t- or Z-intervals; for example, a two-sided 95% Z-interval is \(CI_{\alpha}(\hat{\theta}_{k}) = \left[\hat{\theta}_{k} \pm 1.96*SE(\hat{\theta}_k) \right]\).
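These calculations can be sketched in a few lines of base R. This is a toy illustration under simulated data: here `mu1`/`mu0` are hand-specified stand-ins for the fitted PLE model outputs, whereas PRISM would supply them from the treatment-specific random forests:

```r
# Toy sketch of the param_ple estimate/SE for the overall population (k = 0)
set.seed(2)
n <- 500
A <- rbinom(n, 1, 0.5)
X1 <- rnorm(n)
Y <- 0.3 * A + 0.5 * X1 + rnorm(n)
pi_hat <- rep(mean(A), n)          # randomized trial: marginal P(A=1)
mu1 <- 0.3 + 0.5 * X1              # stand-in for mu-hat(x, a=1)
mu0 <- 0.5 * X1                    # stand-in for mu-hat(x, a=0)
theta_i <- mu1 - mu0               # patient-level estimates (PLEs)

# AIPW-style pseudo-outcomes (augmented form)
Y_star <- theta_i + A * (Y - mu1) / pi_hat - (1 - A) * (Y - mu0) / (1 - pi_hat)

theta_k <- mean(theta_i)                         # averaged PLEs
se_k <- sqrt(sum((Y_star - theta_i)^2) / n^2)    # pseudo-outcome SE
c(est = theta_k, LCL = theta_k - 1.96 * se_k, UCL = theta_k + 1.96 * se_k)
```

Within a discovered subgroup, the same computation would simply restrict the sums to patients in that subgroup.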

Moving back to the PRISM outputs, for any of the provided “param” options, a key output is the object “param.dat”. By default, “param.dat” contains point-estimates, standard errors, lower/upper confidence intervals (depending on alpha_s and alpha_ovrl), and p-values. This output feeds directly into the previously shown default (“tree”) and “forest” plots.

```
## Overall/subgroup specific parameter estimates/inference
res0$param.dat
#> Subgrps N estimand est SE LCL
#> 1 0 800 E(Y|A=0) 1.63979818 0.04567662 1.55013782
#> 2 0 800 E(Y|A=1) 1.83953825 0.04871379 1.74391613
#> 3 0 800 E(Y|A=1)-E(Y|A=0) 0.19974008 0.06329221 0.07550143
#> 4 3 149 E(Y|A=0) 1.28297958 0.11169103 1.06226442
#> 5 3 149 E(Y|A=1) 1.31807996 0.10932336 1.10204360
#> 6 3 149 E(Y|A=1)-E(Y|A=0) 0.03510038 0.15343734 -0.26811059
#> 7 4 277 E(Y|A=0) 1.61280771 0.06705679 1.48079995
#> 8 4 277 E(Y|A=1) 1.67336273 0.07708837 1.52160684
#> 9 4 277 E(Y|A=1)-E(Y|A=0) 0.06055502 0.10098218 -0.13823814
#> 10 7 99 E(Y|A=0) 1.58968017 0.14067817 1.31050892
#> 11 7 99 E(Y|A=1) 1.92148579 0.13161892 1.66029233
#> 12 7 99 E(Y|A=1)-E(Y|A=0) 0.33180562 0.19007951 -0.04540098
#> 13 8 168 E(Y|A=0) 1.76688140 0.08927908 1.59062031
#> 14 8 168 E(Y|A=1) 2.04137779 0.09774962 1.84839357
#> 15 8 168 E(Y|A=1)-E(Y|A=0) 0.27449640 0.13104528 0.01577751
#> 16 9 107 E(Y|A=0) 2.05338724 0.13530977 1.78512246
#> 17 9 107 E(Y|A=1) 2.60314625 0.11616776 2.37283238
#> 18 9 107 E(Y|A=1)-E(Y|A=0) 0.54975901 0.17746914 0.19790919
#> UCL pval alpha Prob(>0)
#> 1 1.7294585 8.032773e-169 0.05 1.0000000
#> 2 1.9351604 7.200995e-180 0.05 1.0000000
#> 3 0.3239787 1.660367e-03 0.05 0.9991998
#> 4 1.5036947 3.103578e-22 0.05 1.0000000
#> 5 1.5341163 9.481730e-24 0.05 1.0000000
#> 6 0.3383114 8.193710e-01 0.05 0.5904724
#> 7 1.7448155 1.087589e-69 0.05 1.0000000
#> 8 1.8251186 1.235131e-61 0.05 1.0000000
#> 9 0.2593482 5.492246e-01 0.05 0.7256337
#> 10 1.8688514 1.876468e-19 0.05 1.0000000
#> 11 2.1826792 2.520863e-26 0.05 1.0000000
#> 12 0.7090122 8.401201e-02 0.05 0.9595611
#> 13 1.9431425 1.190149e-45 0.05 1.0000000
#> 14 2.2343620 1.958038e-48 0.05 1.0000000
#> 15 0.5332153 3.770973e-02 0.05 0.9818998
#> 16 2.3216520 2.475816e-28 0.05 1.0000000
#> 17 2.8334601 5.223720e-42 0.05 1.0000000
#> 18 0.9016088 2.496084e-03 0.05 0.9990251
## Forest plot: Overall/subgroup specific parameter estimates (CIs)
plot(res0, type="tree")
```

`plot(res0, type="forest")`

PLE dependence plots or heatmaps can also be generated from PRISM outputs. If no grid data is supplied, then for categorical/factor variables all observed values are used, while continuous variables are split into 20 equally spaced bins. Based on this grid of values (with up to three variables), PLEs are estimated for each patient while fixing the grid variables at each specific value. The PLEs are then averaged to obtain a point-estimate for each set of grid values; probabilities can be calculated likewise. See below; note that the heatmap is also consistent with the truth (treatment benefit for \(X_1>0, X_2>0\) patients).
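The averaging behind these dependence plots can be sketched in base R. The `predict_ple` function below is a hypothetical stand-in for the fitted PLE model, and the data are simulated; this is not the package's internal code:

```r
# Marginal-dependence sketch: fix X1 at each grid value, predict PLEs for
# all patients, and average the predictions.
set.seed(3)
X1 <- rnorm(300)
X2 <- rnorm(300)
predict_ple <- function(x1, x2) 0.3 * (x1 > 0) + 0.3 * (x2 > 0)  # toy PLE model

grid <- seq(min(X1), max(X1), length.out = 20)   # 20 equally spaced values
dep <- sapply(grid, function(g) {
  mean(predict_ple(rep(g, length(X2)), X2))      # average PLE at X1 = g
})
```

Plotting `dep` against `grid` gives the one-variable dependence curve; a two-variable grid gives the heatmap.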

```
plot_dependence(res0, vars="X1")
#> $res.est
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```

```
plot_dependence(res0, vars="X2")
#> $res.est
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```

```
plot_dependence(res0, vars=c("X1", "X2"))
#> $heatmap.est
```

```
#>
#> $heatmap.prob
```

The hyper-parameters for the individual steps of PRISM can also be easily modified. For example, “filter_glmnet” by default selects covariates based on “lambda.min”, while “ple_ranger” and “submod_lmtree” each require nodes to contain at least 10% of the total observations. To modify these:

```
# PRISM Default: filter_glmnet, ple_ranger, submod_lmtree, param_ple #
# Change hyper-parameters #
res_new_hyper = PRISM(Y=Y, A=A, X=X, filter.hyper = list(lambda="lambda.1se"),
                      ple.hyper = list(min.node.pct=0.05),
                      submod.hyper = list(minsize=200), verbose=FALSE)
plot(res_new_hyper)
```

# Example: Binary Outcome with Binary Treatment

Consider a binary outcome (ex: overall response) with a binary treatment (study drug vs standard of care). The estimand of interest is the risk difference, \(\theta_0 = E(Y|A=1)-E(Y|A=0)\). Similar to the continuous example, we simulate binomial data where roughly 30% of the patients receive no treatment benefit from using \(A=1\) vs \(A=0\). Responders vs non-responders are defined by the continuous predictive covariates \(X_1\) and \(X_2\) for a total of four subgroups. Subgroup treatment effects are: \(\theta_{1} = 0\) (\(X_1 \leq 0, X_2 \leq 0\)), \(\theta_{2} = 0.11\) (\(X_1 > 0, X_2 \leq 0\)), \(\theta_{3} = 0.21\) (\(X_1 \leq 0, X_2 > 0\)), and \(\theta_{4} = 0.31\) (\(X_1 > 0, X_2 > 0\)).

For binary outcomes (Y=0,1), the default settings use glmnet to filter (“filter_glmnet”), random forest patient-level estimates (“ple_ranger”; for binary outcomes, the output is the risk difference), “submod_glmtree” (GLM MOB with binomial family and identity link) for subgroup identification, and param_ple (average of counterfactual risk differences within each subgroup; same formulas as the continuous setting).
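To see why the identity link targets the risk difference directly, here is a single-node sketch in base R (simulated data; not glmtree itself, which fits this model within each partition):

```r
# Risk-difference regression via binomial(link="identity"):
# the coefficient on A is the risk difference E(Y|A=1) - E(Y|A=0).
set.seed(4)
n <- 400
A <- rbinom(n, 1, 0.5)
Y <- rbinom(n, 1, 0.3 + 0.2 * A)   # true risk difference = 0.2
fit <- glm(Y ~ A, family = binomial(link = "identity"),
           start = c(0.3, 0.1))    # identity link can need starting values
rd_glm <- unname(coef(fit)["A"])   # model-based risk difference
rd_raw <- mean(Y[A == 1]) - mean(Y[A == 0])  # difference in proportions
```

For this saturated one-covariate model, `rd_glm` and `rd_raw` coincide; the identity link matters once the tree partitions on covariates.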

```
dat_bin = generate_subgrp_data(family="binomial", seed = 5558)
Y = dat_bin$Y
X = dat_bin$X # 50 covariates, 46 are noise variables, X1 and X2 are truly predictive
A = dat_bin$A # binary treatment, 1:1 randomized
res0 = PRISM(Y=Y, A=A, X=X)
#> Observed Data
#> Filtering: filter_glmnet
#> PLE: ple_ranger
#> Subgroup Identification: submod_glmtree
#> Parameter Estimation: param_ple
summary(res0)
#> $`PRISM Configuration`
#> [1] "filter_glmnet => ple_ranger => submod_glmtree => param_ple"
#>
#> $`Variables that Pass Filter`
#> [1] "X1" "X2" "X3" "X5" "X7" "X9" "X15" "X16" "X17" "X19" "X21"
#> [12] "X28" "X31" "X34" "X35" "X38" "X45"
#>
#> $`Number of Identified Subgroups`
#> [1] 5
#>
#> $`Variables that Define the Subgroups`
#> [1] "X1, X2, X5, X3"
#>
#> $`Parameter Estimates`
#> Subgrps N estimand est SE alpha CI
#> 1 0 800 E(Y|A=0) 0.3351 0.0198 0.05 [0.2963,0.3739]
#> 4 4 86 E(Y|A=0) 0.1739 0.0336 0.05 [0.107,0.2408]
#> 7 5 199 E(Y|A=0) 0.2102 0.0317 0.05 [0.1476,0.2727]
#> 10 6 156 E(Y|A=0) 0.3633 0.0499 0.05 [0.2648,0.4619]
#> 13 8 128 E(Y|A=0) 0.3431 0.0465 0.05 [0.251,0.4351]
#> 16 9 231 E(Y|A=0) 0.4793 0.0402 0.05 [0.4002,0.5585]
#> 2 0 800 E(Y|A=1) 0.4900 0.0214 0.05 [0.448,0.532]
#> 5 4 86 E(Y|A=1) 0.2469 0.0424 0.05 [0.1626,0.3311]
#> 8 5 199 E(Y|A=1) 0.3575 0.0415 0.05 [0.2757,0.4394]
#> 11 6 156 E(Y|A=1) 0.4612 0.0481 0.05 [0.3662,0.5561]
#> 14 8 128 E(Y|A=1) 0.5856 0.0552 0.05 [0.4765,0.6948]
#> 17 9 231 E(Y|A=1) 0.6611 0.0376 0.05 [0.587,0.7351]
#> 3 0 800 E(Y|A=1)-E(Y|A=0) 0.1549 0.0274 0.05 [0.1011,0.2087]
#> 6 4 86 E(Y|A=1)-E(Y|A=0) 0.0730 0.0557 0.05 [-0.0379,0.1838]
#> 9 5 199 E(Y|A=1)-E(Y|A=0) 0.1474 0.0511 0.05 [0.0467,0.2481]
#> 12 6 156 E(Y|A=1)-E(Y|A=0) 0.0978 0.0683 0.05 [-0.0372,0.2328]
#> 15 8 128 E(Y|A=1)-E(Y|A=0) 0.2426 0.0704 0.05 [0.1033,0.3819]
#> 18 9 231 E(Y|A=1)-E(Y|A=0) 0.1817 0.0540 0.05 [0.0754,0.2881]
#>
#> attr(,"class")
#> [1] "summary.PRISM"
plot(res0)
```

# Example: Survival Outcome with Binary Treatment

Survival outcomes are also supported in PRISM. The default settings use glmnet to filter (“filter_glmnet”), ranger patient-level estimates (“ple_ranger”; for survival, the output is the restricted mean survival time treatment difference), “submod_weibull” (MOB with a Weibull loss function) for subgroup identification, and param_cox (subgroup-specific Cox regression models). Another subgroup option is “submod_ctree”, which uses the conditional inference tree (ctree) algorithm to find subgroups; this looks for partitions irrespective of treatment assignment and thus corresponds to finding prognostic effects.

```
# Load TH.data (no treatment; generate treatment randomly to simulate null effect) ##
data("GBSG2", package = "TH.data")
surv.dat = GBSG2
# Design Matrices ###
Y = with(surv.dat, Surv(time, cens))
X = surv.dat[,!(colnames(surv.dat) %in% c("time", "cens")) ]
set.seed(6345)
A = rbinom( n = dim(X)[1], size=1, prob=0.5 )
# Default: filter_glmnet ==> ple_ranger (estimates patient-level RMST(1 vs 0) ==> submod_weibull (MOB with Weibull) ==> param_cox (Cox regression)
res_weibull1 = PRISM(Y=Y, A=A, X=X)
#> Observed Data
#> Filtering: filter_glmnet
#> PLE: ple_ranger
#> Subgroup Identification: submod_weibull
#> Parameter Estimation: param_cox
plot(res_weibull1, type="PLE:waterfall")
```

`plot(res_weibull1)`

```
# PRISM: filter_glmnet ==> submod_ctree ==> param_cox (Cox regression) #
res_ctree1 = PRISM(Y=Y, A=A, X=X, submod = "submod_ctree")
#> Observed Data
#> Filtering: filter_glmnet
#> PLE: ple_ranger
#> Subgroup Identification: submod_ctree
#> Parameter Estimation: param_cox
plot(res_ctree1)
```

Resampling methods are also a feature of PRISM. Bootstrap (resample=“Bootstrap”), permutation (resample=“Permutation”), and cross-validation (resample=“CV”) resampling are included. Resampling can be used to obtain de-biased or “honest” subgroup estimates, inference, and/or probability statements. For each resampling method, the sampling mechanism can be stratified by the discovered subgroups (default: stratify=TRUE). To summarize:

**Bootstrap Resampling**

Given observed data \((Y, A, X)\), fit \(PRISM(Y,A,X)\). Based on the identified \(k=1,...,K\) subgroups, output subgroup assignment for each patient. For the overall population and each subgroup (\(k=0,...,K\)), store the associated parameter estimates (\(\hat{\theta}_{k}\)). For \(r=1,...,R\) resamples with replacement (\((Y_r, A_r, X_r)\)), fit \(PRISM(Y_r, A_r, X_r)\) and obtain new subgroup assignments \(k_r=1,...,K_r\) with associated parameter estimates \(\hat{\theta}_{k_r}\). For resample \(r\), the bootstrap estimates and SEs for the originally identified subgroups (\(k=0,...,K\)) are calculated respectively as: \[ \hat{\theta}_{rk} = \sum_{k_r} w_{k_r} \hat{\theta}_{k_r} \] \[ SE(\hat{\theta}_{rk}) = \sqrt{ \sum_{k_r} w_{k_r}^2 SE(\hat{\theta}_{k_r})^2 } \] where \(w_{k_r} = n(k \cap k_r)/ \sum_{k_r} n(k \cap k_r)\) and \(n(k \cap k_r)\) is the number of subjects in the original subgroup \(k\) who are also in the bootstrap subgroup \(k_r\). The bootstrap smoothed estimate and standard error, as well as probability statements, are calculated as: \[ \tilde{\theta}_{k} = \frac{1}{R} \sum_r \hat{\theta}_{rk} \] \[ SE(\hat{\theta}_{k})_B = \sqrt{ \frac{1}{R} \sum_r (\hat{\theta}_{rk}-\tilde{\theta}_{k})^2 } \] \[ \hat{P}(\hat{\theta}_{k}>c) = \frac{1}{R} \sum_r I(\hat{\theta}_{rk}>c) \]
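The overlap-weight calculation can be illustrated with a tiny hand-built example in base R (all labels and estimates below are made up for illustration; PRISM performs this internally for every resample):

```r
# Map resampled-subgroup estimates back onto an original subgroup k
# via the overlap weights w_{k_r} = n(k intersect k_r) / sum over k_r.
orig_k <- c(1, 1, 1, 2, 2, 2, 2, 2)      # original subgroup labels (8 patients)
boot_k <- c(1, 1, 2, 1, 2, 2, 2, 2)      # labels from the resampled fit
est_kr <- c("1" = 0.10, "2" = 0.40)      # resample estimates by subgroup

k <- 2                                   # original subgroup of interest
tab <- table(boot_k[orig_k == k])        # n(k intersect k_r): 1 in "1", 4 in "2"
w <- tab / sum(tab)                      # overlap weights: 0.2, 0.8
theta_rk <- sum(w * est_kr[names(tab)])  # 0.2*0.10 + 0.8*0.40 = 0.34
```

Repeating this over \(r=1,...,R\) resamples and averaging gives the smoothed estimate \(\tilde{\theta}_{k}\).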

Bootstrap confidence intervals can then be formed, \(\left[\hat{\theta}_{k} \pm 1.96*SE(\hat{\theta}_k)_B \right]\). Bootstrap calibration, which uses the observed point-estimates and SEs but adjusts the alpha level such that we obtain, on average, \(1-\alpha\) coverage across all identified subgroups, is also implemented for PRISM (default: calibrate=FALSE). See Loh et al 2016 (GUIDE) for more details. Importantly, for calibration to be effective, \(\hat{\theta}_{k}\) should be relatively unbiased. Another approach is to use the bootstrap smoothed estimates, \(\tilde{\theta}_{k}\), along with percentile-based CIs (i.e. the 2.5 and 97.5 quantiles of the bootstrap distribution). Other metrics, such as the bootstrap bias, are also automatically calculated (and can be used to adjust the initial subgroup estimates).

Returning to the survival example, we now re-run PRISM with 50 bootstrap resamples (for increased accuracy, use >1000). The smoothed bootstrap estimates, bootstrap standard errors, bootstrap bias, percentile CI, and calibrated CI correspond to “est_resamp”, “SE_resamp”, “bias.boot”, “LCL.pct”/“UCL.pct”, and “LCL.calib”/“UCL.calib” respectively. We can also plot a density plot of the bootstrap distributions through the plot(…,type=“resample”) option.

```
res_boot = PRISM(Y=Y, A=A, X=X, resample = "Bootstrap", R=50, ple = "None")
# Plot of distributions #
plot(res_boot, type="resample", estimand = "HR(A=1 vs A=0)")+geom_vline(xintercept = 1)
```

**Permutation Resampling**

Permutation resampling (resample=“Permutation”) follows the same general procedure as bootstrap resampling. The main difference is that the treatment assignment \(A\) is randomly shuffled without replacement, which simulates the null hypothesis of no treatment effect. Key outputs are the permutation p-values (pval_perm in param.dat) and the permutation resampling distributions.
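A minimal, self-contained sketch of the permutation idea in base R (simulated data; not the package internals, which permute within the full PRISM fit):

```r
# Permutation test: shuffle A to simulate no treatment effect, then
# compare the observed effect to the null distribution.
set.seed(5)
n <- 200
A <- rbinom(n, 1, 0.5)
Y <- 0.4 * A + rnorm(n)
obs <- mean(Y[A == 1]) - mean(Y[A == 0])        # observed treatment effect
null_dist <- replicate(500, {
  A_perm <- sample(A)                           # permute treatment labels
  mean(Y[A_perm == 1]) - mean(Y[A_perm == 0])
})
pval_perm <- mean(abs(null_dist) >= abs(obs))   # two-sided permutation p-value
```

In PRISM, the same shuffling is applied before refitting the whole pipeline, so the null distribution also accounts for the subgroup search.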

**Cross-Validation**

Cross-validation resampling (resample=“CV”) also follows the same general procedure as bootstrap resampling. Given observed data \((Y, A, X)\), fit \(PRISM(Y,A,X)\). Based on the identified \(k=1,...,K\) subgroups, output subgroup assignment for each patient. Next, split the data into \(R\) folds (ex: 5). For fold \(r\) with sample size \(n_r\), fit PRISM on \((Y[-r],A[-r], X[-r])\) and predict the patient-level estimates and subgroup assignments (\(k_r=1,...,K_r\)) for patients in fold \(r\). The data in fold \(r\) are then used to obtain parameter estimates for each subgroup, \(\hat{\theta}_{k_r}\). For fold \(r\), estimates and SEs for the original subgroups (\(k=1,...,K\)) are then obtained using the same formulas as with bootstrap resampling, again denoted as (\(\hat{\theta}_{rk}\), \(SE(\hat{\theta}_{rk})\)). This is repeated for each fold, and “CV” estimates and SEs are calculated for each identified subgroup. Let \(w_r = n_r / \sum_r n_r\); then:

\[ \hat{\theta}_{k,CV} = \sum_r w_r \hat{\theta}_{rk} \] \[ SE(\hat{\theta}_k)_{CV} = \sqrt{ \sum_{r} w_{r}^2 SE(\hat{\theta}_{rk})^2 }\] CV-based confidence intervals can then be formed, \(\left[\hat{\theta}_{k,CV} \pm 1.96*SE(\hat{\theta}_k)_{CV} \right]\).
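A small numeric instance of these CV combination formulas in base R (the fold-level estimates below are made up; equal fold sizes are assumed):

```r
# Combine per-fold estimates/SEs for one subgroup k across R = 5 folds
theta_rk <- c(0.20, 0.35, 0.10, 0.30, 0.25)   # per-fold estimates
se_rk    <- c(0.12, 0.15, 0.10, 0.14, 0.11)   # per-fold standard errors
w_r      <- rep(1 / 5, 5)                     # equal fold sizes: w_r = n_r / n

theta_cv <- sum(w_r * theta_rk)               # 0.24
se_cv    <- sqrt(sum(w_r^2 * se_rk^2))        # ~0.056
ci_cv    <- theta_cv + c(-1, 1) * 1.96 * se_cv
```

With unequal folds, `w_r` would instead be the fold sample sizes divided by the total.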

Overall, PRISM is a flexible algorithm that can aid in subgroup detection and exploration of heterogeneous treatment effects. Each step of PRISM is customizable, allowing for fast experimentation and improvement of individual steps. More details on creating user-specific models can be found in the “User_Specific_Models_PRISM” vignette. The StratifiedMedicine R package and PRISM will be continually updated and improved. User feedback will further facilitate improvements.