**mecor** is an R package for Measurement Error CORrection. **mecor** implements measurement error correction methods for linear models with continuous outcomes. The measurement error can either occur in a continuous covariate or in the continuous outcome. This vignette discusses how a sensitivity analysis for random covariate measurement error is conducted in **mecor**.

*Regression calibration* is one of the most popular correction methods for covariate measurement error. This vignette shows how *regression calibration* is used in **mecor** to correct for random measurement error in a covariate. Our interest lies in estimating the association between a continuous reference exposure \(X\) and a continuous outcome \(Y\), given covariates \(Z\). Instead of \(X\), the substitute error-prone exposure \(X^*\) is measured, which is assumed to contain random measurement error. It is further assumed that no extra information is available to quantify the random measurement error in \(X^*\). The input for the measurement error correction is therefore constrained to informed guesses about the size of the random measurement error, which could be based on the literature or on expert knowledge. We refer to the vignettes discussing e.g. *standard regression calibration* for random measurement error correction when validation data are available.

We assume that \(X^*\) is measured with random measurement error. This means that we assume that \(X^* = X + U\), where \(U\) has mean 0 and variance \(\tau^2\). More specifically, we assume non-differential random measurement error, i.e. \(X^*\) is independent of the outcome \(Y\), given \(X\).
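As a quick illustration of this error model, data obeying \(X^* = X + U\) can be simulated directly in base R. The sample size and \(\tau^2\) below are illustrative choices, unrelated to the `icvs` data introduced later:

```r
# Simulate the classical (random) measurement error model X* = X + U;
# all parameter values are illustrative
set.seed(1)
n <- 1e5
tau2 <- 0.25                              # assumed variance of U
x <- rnorm(n)                             # reference exposure X
u <- rnorm(n, mean = 0, sd = sqrt(tau2))  # random measurement error U
x_star <- x + u                           # error-prone substitute X*
# Since U is independent of X, Var(X*) = Var(X) + tau^2:
var(x_star) - var(x)                      # approximately 0.25
```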

The object `MeasErrorRandom()` in **mecor** is used for random measurement error correction in a covariate. We explain its usage with the simulated data set `icvs`, an internal covariate-validation study. We use this data set to explore random measurement error correction without using the reference measure \(X\) that is available in the data set. The data set `icvs` contains 1000 observations of the outcome \(Y\), the error-prone exposure \(X^*\) and the covariate \(Z\). The reference exposure \(X\) is observed in approximately 25% of the individuals in the study, but will be ignored here.

```
# load internal covariate validation study
data("icvs", package = "mecor")
head(icvs)
#>           Y     X_star          Z          X
#> 1 -3.473164 -0.2287010 -1.5858049         NA
#> 2 -3.327934 -1.3320494 -0.6077454         NA
#> 3  1.314735  2.0305727  0.4461727  2.2256377
#> 4  1.328727  0.3027101  0.1739813         NA
#> 5  1.240446 -0.8465389  1.5480392 -0.7521792
#> 6  3.183868  0.1081888  1.1230232         NA
```

When ignoring the measurement error in \(X^*\), one would naively regress \(Y\) on \(X^*\) and \(Z\). This results in a biased estimate of the exposure-outcome association:

```
# naive estimate of the exposure-outcome association
lm(Y ~ X_star + Z, data = icvs)
#>
#> Call:
#> lm(formula = Y ~ X_star + Z, data = icvs)
#>
#> Coefficients:
#> (Intercept)       X_star            Z
#>    -0.03947      0.41372      2.08457
```
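The size of this bias can be anticipated: in the simplest setting without covariates, the naive slope estimates \(\lambda \beta\) rather than \(\beta\), where \(\lambda = Var(X)/(Var(X) + \tau^2)\) is the attenuation (regression calibration) factor. A small simulated sketch of this, with all parameter values illustrative and unrelated to the `icvs` data:

```r
# Attenuation of the naive slope under random measurement error, in the
# simplest case without covariates (all parameter values are illustrative)
set.seed(2)
n <- 1e5
tau2 <- 0.25
beta <- 0.5                              # true exposure-outcome association
x <- rnorm(n)                            # Var(X) = 1
x_star <- x + rnorm(n, sd = sqrt(tau2))  # error-prone substitute
y <- beta * x + rnorm(n)
lambda <- 1 / (1 + tau2)                 # attenuation: Var(X) / (Var(X) + tau^2)
naive <- coef(lm(y ~ x_star))[["x_star"]]
c(expected = beta * lambda, naive = naive)  # both approximately 0.4
```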

Suppose that \(X\) is not observed in the internal covariate-validation study `icvs`. To correct the bias in the naive association between exposure \(X^*\) and outcome \(Y\) given \(Z\), we need to make an informed guess about the size of \(\tau^2\). Suppose we assume \(\tau^2 = 0.25\). One can then proceed as follows using `mecor()`:

```
# Use MeasErrorRandom for measurement error correction:
mecor(Y ~ MeasErrorRandom(substitute = X_star, variance = 0.25) + Z,
      data = icvs)
#>
#> Call:
#> mecor(formula = Y ~ MeasErrorRandom(substitute = X_star, variance = 0.25) +
#>     Z, data = icvs)
#>
#> Coefficients Corrected Model:
#> (Intercept)  cor_X_star           Z
#> -0.03244702  0.50953290  1.98557861
#>
#> Coefficients Uncorrected Model:
#> (Intercept)      X_star           Z
#> -0.03946702  0.41371614  2.08457045
```
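Since \(\tau^2\) is an informed guess rather than an estimate, a sensitivity analysis would repeat this correction over a grid of plausible \(\tau^2\) values and inspect how the corrected estimates change. Below is a base-R sketch of that idea, applying the covariance-matrix correction to simulated data so that it runs without the `icvs` set; the grid and all parameter values are illustrative:

```r
# Sensitivity analysis sketch: correct the naive slopes for a grid of assumed
# tau^2 values (simulated data; all values are illustrative)
set.seed(3)
n <- 1e5
z <- rnorm(n)
x <- 0.5 * z + rnorm(n)                   # exposure, correlated with Z
x_star <- x + rnorm(n, sd = sqrt(0.25))   # true tau^2 = 0.25
y <- 0.5 * x + 2 * z + rnorm(n)           # true slopes: 0.5 (X), 2 (Z)
naive <- coef(lm(y ~ x_star + z))[-1]     # naive slopes (drop intercept)
res <- sapply(c(0.1, 0.25, 0.4), function(tau2) {
  S_star <- cov(cbind(x_star, z))         # variance--covariance of (X*, Z)
  S <- S_star
  S[1, 1] <- S[1, 1] - tau2               # Var(X) = Var(X*) - tau^2
  drop(solve(S) %*% S_star %*% naive)     # corrected slopes for this tau^2
})
colnames(res) <- c("tau2 = 0.1", "tau2 = 0.25", "tau2 = 0.4")
round(res, 3)
```

At the true value \(\tau^2 = 0.25\) the corrected \(X\) slope recovers the simulated 0.5; the spread across the grid shows how sensitive the conclusion is to the assumed error variance.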

To correct for the random measurement error in \(X^*\), **mecor** constructs the calibration model matrix as follows:

```
# First, construct the variance--covariance matrix of X_star and Z:
# ( Var(X_star)    Cov(X_star, Z)
#   Cov(Z, X_star) Var(Z)         )
# To do so, we construct Q, a matrix with 1000 rows (number of observations)
# and 2 columns. The first column of Q contains all 1000 observations of
# X_star, each minus the mean of X_star. The second column of Q contains all
# 1000 observations of Z, each minus the mean of Z.
Q <- scale(cbind(icvs$X_star, icvs$Z), scale = FALSE)
# Subsequently, the variance--covariance matrix of X_star and Z is constructed:
matrix <- t(Q) %*% Q / (length(icvs$X_star) - 1)
# Then, the variance--covariance matrix of X and Z is constructed, by using:
# Var(X) = Var(X_star) - Var(U)  <--- Var(U) is the assumed tau^2
# Cov(X, Z) = Cov(X_star, Z)     <--- since U is assumed independent of Z
matrix1 <- matrix
matrix1[1, 1] <- matrix1[1, 1] - 0.25 # tau^2 = 0.25
# Rosner et al. (1992) show that the calibration model matrix can be
# constructed by taking the inverse of the variance--covariance matrix of X
# and Z and matrix multiplying that matrix with the variance--covariance
# matrix of X_star and Z.
model_matrix <- solve(matrix1) %*% matrix
model_matrix
#>            [,1]         [,2]
#> [1,]  1.2316002 1.110223e-16
#> [2,] -0.2392748 1.000000e+00
# Since both variance--covariance matrices are symmetric, the first row of the
# reverse product directly contains lambda1 and lambda2 (defined below):
matrix1 %*% solve(matrix)
#>              [,1]      [,2]
#> [1,] 8.119518e-01 0.1942796
#> [2,] 1.110223e-16 1.0000000
# The resulting model_matrix is:
# ( 1/lambda1        0
#   -lambda2/lambda1 1 )
# Where,
# lambda1 = Cov(X, X_star|Z) / Var(X_star|Z)
# lambda2 = Cov(X, Z|X_star) / Var(Z|X_star)
# Or, more familiar, the calibration model,
# E[X|X_star, Z] = lambda0 + lambda1 * X_star + lambda2 * Z
lambda1 <- 1 / model_matrix[1, 1]
lambda2 <- model_matrix[2, 1] * -lambda1
# From standard theory, we have,
# lambda0 = mean(X) - lambda1 * mean(X_star) - lambda2 * mean(Z)
# mean(X) = mean(X_star) since we assume random measurement error
lambda0 <- mean(icvs$X_star) - lambda1 * mean(icvs$X_star) - lambda2 * mean(icvs$Z)
# The calibration model matrix Lambda is defined as:
# ( lambda1 lambda0 lambda2
#   0       1       0
#   0       0       1       )
model_matrix <- diag(3)
model_matrix[1, 1:3] <- c(lambda1, lambda0, lambda2)
model_matrix
#>           [,1]        [,2]      [,3]
#> [1,] 0.8119518 -0.01377731 0.1942796
#> [2,] 0.0000000  1.00000000 0.0000000
#> [3,] 0.0000000  0.00000000 1.0000000
# The calibration model matrix is standard output of mecor, and can be found
# using:
mecor_fit <- mecor(Y ~ MeasErrorRandom(X_star, 0.25) + Z,
                   data = icvs)
mecor_fit$corfit$matrix
#>           Lambda1     Lambda0   Lambda3
#> Lambda1 0.8119518 -0.01377731 0.1942796
#> Lambda0 0.0000000  1.00000000 0.0000000
#> Lambda3 0.0000000  0.00000000 1.0000000
```
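As a cross-check of the matrix construction above, one can verify in simulated data, where \(X\) is fully observed, that the Rosner-type construction reproduces the lambdas obtained by directly regressing \(X\) on \(X^*\) and \(Z\). The simulation below is illustrative and does not use `icvs`:

```r
# Cross-check: lambdas from the covariance-matrix construction vs. a direct
# fit of the calibration model E[X | X*, Z] (simulated, illustrative data)
set.seed(4)
n <- 1e5
z <- rnorm(n)
x <- 0.5 * z + rnorm(n)
x_star <- x + rnorm(n, sd = sqrt(0.25))    # true tau^2 = 0.25
# Direct fit, possible here because X is observed in the simulation:
direct <- coef(lm(x ~ x_star + z))[-1]
# Matrix construction using the (correct) assumed tau^2 = 0.25:
S_star <- cov(cbind(x_star, z))
S <- S_star
S[1, 1] <- S[1, 1] - 0.25                  # Var(X) = Var(X*) - tau^2
lambdas <- drop(solve(S_star) %*% S[, 1])  # (lambda1, lambda2)
rbind(direct = direct, matrix = lambdas)   # the two rows nearly coincide
```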

Subsequently, the naive estimates of the outcome model are multiplied by the inverse of the calibration model matrix to obtain corrected estimates of the outcome model.

```
# Fit naive outcome model
naive_fit <- lm(Y ~ X_star + Z,
                data = icvs)
# Save coefficients
beta_star <- naive_fit$coefficients
# To prepare the coefficients for the measurement error correction, exchange
# the intercept and the coefficient for X_star, matching the column order of
# the calibration model matrix Lambda:
beta_star[1:2] <- rev(beta_star[1:2])
# Perform the measurement error correction:
beta <- beta_star %*% solve(model_matrix)
# Reverse the order back
beta[1:2] <- rev(beta[1:2])
beta # corrected coefficients of the outcome model
#>            [,1]      [,2]     [,3]
#> [1,] -0.03244702 0.5095329 1.985579
```

This exactly matches the output of `mecor()` shown above.