# ILSE: simulated examples

#### 2022-01-31

The package can be loaded with the command:

```r
library("ILSE")
```

# ILSE can handle (non)missing data with continuous variables

First, we generate a small simulated dataset.

```r
set.seed(1)
n <- 100
p <- 6
X <- MASS::mvrnorm(n, rep(0, p), cor.mat(p, rho = 0.5))
beta0 <- rep(c(1, -1), times = 3)
Y <- -2 + X %*% beta0 + rnorm(n, sd = 1)
```

## A special case: without missing values

Then, we fit the linear regression model using ILSE on the data without missing values.
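With complete data, `ilse()` behaves like ordinary least squares. A minimal sketch (the formula interface with a matrix covariate is the same one used for `Y ~ Xmis` later in this vignette):

```r
library(ILSE)

# regenerate the simulated data from above
set.seed(1)
n <- 100; p <- 6
X <- MASS::mvrnorm(n, rep(0, p), cor.mat(p, rho = 0.5))
beta0 <- rep(c(1, -1), times = 3)
Y <- -2 + X %*% beta0 + rnorm(n, sd = 1)

# fit by ILSE on fully observed data
ilse_full <- ilse(Y ~ X)
ilse_full
```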

We can also pass a `data.frame` object as input to ILSE.
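For example (a sketch; the `data =` argument is the same one used in the categorical-variable example later on):

```r
library(ILSE)

set.seed(1)
n <- 100; p <- 6
X <- MASS::mvrnorm(n, rep(0, p), cor.mat(p, rho = 0.5))
Y <- -2 + X %*% rep(c(1, -1), times = 3) + rnorm(n, sd = 1)

# wrap the response and covariates in a data.frame
dat <- data.frame(Y = Y, X = X)
ilse_df <- ilse(Y ~ ., data = dat)
ilse_df
```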

Check the significant variables by bootstrap.
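A sketch using the `Nbt` argument of `summary()` (the same argument is set to 40 in a later chunk; 20 replicates here is an arbitrary choice):

```r
library(ILSE)

set.seed(1)
n <- 100; p <- 6
X <- MASS::mvrnorm(n, rep(0, p), cor.mat(p, rho = 0.5))
Y <- -2 + X %*% rep(c(1, -1), times = 3) + rnorm(n, sd = 1)

ilse_full <- ilse(Y ~ X)
# bootstrap-based standard errors, Z-values and p-values
s0 <- summary(ilse_full, Nbt = 20)
s0
```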

## Handle data with missing values

First, we randomly remove some entries in X.
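For instance, entries can be knocked out uniformly at random. A base-R sketch: `mis_rate = 0.3` is an assumed rate, and an exchangeable correlation matrix stands in for `cor.mat(p, rho = 0.5)` so the chunk runs without the package:

```r
library(MASS)

set.seed(1)
n <- 100; p <- 6
Sigma <- matrix(0.5, p, p); diag(Sigma) <- 1  # stand-in for cor.mat(p, rho = 0.5)
X <- mvrnorm(n, rep(0, p), Sigma)
Y <- -2 + X %*% rep(c(1, -1), times = 3) + rnorm(n, sd = 1)

mis_rate <- 0.3                        # assumed missing rate
na_id <- sample(1:(n * p), n * p * mis_rate)
Xmis <- X
Xmis[na_id] <- NA
sum(is.na(Xmis))                       # 180 entries removed
```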

Second, we use lm to fit the linear regression model based on complete cases only, i.e., CC analysis. We cannot detect any significant covariates.
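`lm()` silently drops incomplete rows (its default `na.action`), so the CC fit may be left with very few observations. A self-contained sketch, again with an exchangeable correlation matrix standing in for `cor.mat`:

```r
library(MASS)

set.seed(1)
n <- 100; p <- 6
Sigma <- matrix(0.5, p, p); diag(Sigma) <- 1
X <- mvrnorm(n, rep(0, p), Sigma)
Y <- -2 + X %*% rep(c(1, -1), times = 3) + rnorm(n, sd = 1)
Xmis <- X
Xmis[sample(1:(n * p), n * p * 0.3)] <- NA

lm_cc <- lm(Y ~ Xmis)   # rows with any NA are dropped
nobs(lm_cc)             # number of complete cases actually used
summary(lm_cc)
```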

Third, we use ILSE to fit the linear regression model based on all the data. We can fit a linear regression model without an intercept by setting the formula:
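A sketch, assuming `ilse()` honors standard R formula syntax as the text states (`- 1` removes the intercept):

```r
library(ILSE)

set.seed(1)
n <- 100; p <- 6
X <- MASS::mvrnorm(n, rep(0, p), cor.mat(p, rho = 0.5))
Y <- -2 + X %*% rep(c(1, -1), times = 3) + rnorm(n, sd = 1)
Xmis <- X
Xmis[sample(1:(n * p), n * p * 0.3)] <- NA

# "- 1" in the formula removes the intercept
ilse_noint <- ilse(Y ~ Xmis - 1, verbose = TRUE)
ilse_noint
```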

Then, we fit a linear regression model with an intercept by the following command:
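The default formula keeps the intercept; a sketch mirroring the `ilse(Y ~ Xmis, ...)` calls used later in this vignette:

```r
library(ILSE)

set.seed(1)
n <- 100; p <- 6
X <- MASS::mvrnorm(n, rep(0, p), cor.mat(p, rho = 0.5))
Y <- -2 + X %*% rep(c(1, -1), times = 3) + rnorm(n, sd = 1)
Xmis <- X
Xmis[sample(1:(n * p), n * p * 0.3)] <- NA

ilse1 <- ilse(Y ~ Xmis, verbose = TRUE)
ilse1
```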

Fourth, the bootstrap is applied to evaluate the standard error and p-value of each coefficient estimated by ILSE. We observe four significant coefficients.
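A sketch using `summary(..., Nbt = ...)` as shown later in this vignette (20 replicates is an arbitrary choice):

```r
library(ILSE)

set.seed(1)
n <- 100; p <- 6
X <- MASS::mvrnorm(n, rep(0, p), cor.mat(p, rho = 0.5))
Y <- -2 + X %*% rep(c(1, -1), times = 3) + rnorm(n, sd = 1)
Xmis <- X
Xmis[sample(1:(n * p), n * p * 0.3)] <- NA

ilse1 <- ilse(Y ~ Xmis)
s_ilse <- summary(ilse1, Nbt = 20)  # Nbt: number of bootstrap resamples
s_ilse
```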

The ILSE package also provides Full Information Maximum Likelihood (FIML) for linear regression via `fimlreg`. We show how to use it to handle the above missing data.
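A sketch, assuming `fimlreg` shares the formula interface of `ilse`:

```r
library(ILSE)

set.seed(1)
n <- 100; p <- 6
X <- MASS::mvrnorm(n, rep(0, p), cor.mat(p, rho = 0.5))
Y <- -2 + X %*% rep(c(1, -1), times = 3) + rnorm(n, sd = 1)
Xmis <- X
Xmis[sample(1:(n * p), n * p * 0.3)] <- NA

# full information maximum likelihood on the incomplete data
fiml1 <- fimlreg(Y ~ Xmis)
fiml1
```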

We also use the bootstrap to evaluate the standard error and p-value of each coefficient estimated by FIML. We observe only one significant coefficient.

## Visualization

We visualize the p-values of each method, where the red line marks 0.05 on the y-axis and the blue line marks 0.1.
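A base-R sketch of such a plot. `plot_pvals` is a hypothetical helper, and `pMat` is assumed to be a matrix of p-values with one column per method, assembled from the summaries above:

```r
plot_pvals <- function(pMat) {
  # one line per method, reference lines at 0.05 (red) and 0.1 (blue)
  matplot(pMat, type = "b", pch = seq_len(ncol(pMat)), lty = 1,
          xlab = "coefficient", ylab = "p-value")
  abline(h = 0.05, col = "red")
  abline(h = 0.10, col = "blue")
  legend("topright", legend = colnames(pMat),
         pch = seq_len(ncol(pMat)), col = seq_len(ncol(pMat)))
}
```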

# ILSE can handle missing data with continuous and categorical variables

Based on the above data, we add a new column, a categorical variable (Sex), to the data.frame. This variable is not associated with the outcome variable.

```r
dat <- data.frame(Y = Y, X = Xmis)
dat$Sex <- factor(rep(c('male', 'female'), times = n/2))
dat$Sex[sample(1:n, n*mis_rate)] <- NA
ilse1 <- ilse(Y ~ ., data = dat, verbose = TRUE)
```

We can change the number of bootstrap replicates used to calculate the standard errors, Z-values and p-values of the coefficients.

```r
s3 <- summary(ilse1, Nbt = 40)
s3
```

# ILSE can correctly identify the important variables

## generate data

First, we generate data from a linear regression model with three important variables (1, 3, 5) and three unimportant variables (2, 4, 6).

We randomly assign missing values in the design matrix.
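A sketch of such a design: `beta0` puts nonzero effects on X1, X3 and X5 only (the exact coefficient values are an assumption), and an exchangeable correlation matrix stands in for `cor.mat(p, rho = 0.5)`:

```r
library(MASS)

set.seed(1)
n <- 100; p <- 6
Sigma <- matrix(0.5, p, p); diag(Sigma) <- 1   # stand-in for cor.mat(p, rho = 0.5)
X <- mvrnorm(n, rep(0, p), Sigma)
beta0 <- rep(c(1, 0), times = 3)  # assumed: X1, X3, X5 active; X2, X4, X6 inert
Y <- -2 + X %*% beta0 + rnorm(n, sd = 1)

mis_rate <- 0.3                   # assumed missing rate
Xmis <- X
Xmis[sample(1:(n * p), n * p * mis_rate)] <- NA
```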

Next, we use ILSE to fit the model.
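Mirroring the `ilse(Y ~ Xmis, ...)` calls used elsewhere in this vignette:

```r
library(ILSE)

set.seed(1)
n <- 100; p <- 6
X <- MASS::mvrnorm(n, rep(0, p), cor.mat(p, rho = 0.5))
beta0 <- rep(c(1, 0), times = 3)  # assumed: only X1, X3, X5 active
Y <- -2 + X %*% beta0 + rnorm(n, sd = 1)
Xmis <- X
Xmis[sample(1:(n * p), n * p * 0.3)] <- NA

ilse_fit <- ilse(Y ~ Xmis, verbose = TRUE)
summary(ilse_fit)
```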

We then fit the model using lm and FIML, and finally compare ILSE with these two methods.
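A sketch of the two competing fits (assuming `fimlreg` takes a formula like `ilse`); their p-values can then be placed side by side with those from ILSE:

```r
library(ILSE)

set.seed(1)
n <- 100; p <- 6
X <- MASS::mvrnorm(n, rep(0, p), cor.mat(p, rho = 0.5))
beta0 <- rep(c(1, 0), times = 3)  # assumed: only X1, X3, X5 active
Y <- -2 + X %*% beta0 + rnorm(n, sd = 1)
Xmis <- X
Xmis[sample(1:(n * p), n * p * 0.3)] <- NA

lm_cc <- lm(Y ~ Xmis)        # complete-case analysis
fiml1 <- fimlreg(Y ~ Xmis)   # full information maximum likelihood
```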

## Visualization

We visualize the p-values of each method, where the red line marks 0.05 on the y-axis. Under significance level 0.05, we find that both ILSE and FIML identify all important variables (X1, X3 and X5), while the CC method identifies only X1 and X5.

# ILSE can handle data with high missing rate

Here, we generate a dataset with 80% missing values, then use ILSE to fit the model.

```r
# generate data from linear model
set.seed(10)
n <- 100
p <- 6
X <- MASS::mvrnorm(n, rep(0, p), cor.mat(p, rho = 0.5))
beta0 <- rep(c(1, -1), times = 3)
Y <- -2 + X %*% beta0 + rnorm(n, sd = 1)

# generate missing values
mis_rate <- 0.8
set.seed(1)
na_id <- sample(1:(n*p), n*p*mis_rate)
Xmis <- X
Xmis[na_id] <- NA
# retain 4 complete cases
Xmis[1:4, ] <- X[1:4, ]
sum(complete.cases(Xmis))
```

The CC method will fail.

```r
lm1 <- lm(Y ~ Xmis)
summary(lm1)
```

However, ILSE still works.

```r
ilse2 <- ilse(Y ~ Xmis, verbose = TRUE)
s2 <- summary(ilse2)
s2
```

# ILSE can handle large-scale data

We generate a large-scale dataset with n = 1000 and p = 50.

```r
n <- 1000
p <- 50
X <- MASS::mvrnorm(n, rep(0, p), cor.mat(p, rho = 0.5))
beta0 <- rep(c(1, -1), length = p)
Y <- -2 + X %*% beta0 + rnorm(n, sd = 1)

# generate missing values
mis_rate <- 0.3
set.seed(1)
na_id <- sample(1:(n*p), n*p*mis_rate)
Xmis <- X
Xmis[na_id] <- NA

# retain 10 complete cases
Xmis[1:10, ] <- X[1:10, ]
lm1 <- lm(Y ~ Xmis)
lm1
system.time(ilse2 <- ilse(Y ~ Xmis, data = NULL, verbose = TRUE))
```