Technical Note on the Baseline Regression Implementation

The Regression Model

The general regression problem is written as,

\[\mathbf{y} = \mathbf{X}\beta + \epsilon\text{,}\]

where \(\mathbf{y}\) is the length \(n\) vector of observations, \(\beta\) is the length \(m\) vector of predictor coefficients, \(\mathbf{X}\) is the \(n\times m\) matrix of predictors, and \(\epsilon\) is the length \(n\) vector of residuals. The goal of the regression is to find the values of \(\beta\) that minimize the quantity

\[(\mathbf{y} - \mathbf{X}\beta)^T\mathbf{\Omega}^{-1}(\mathbf{y} -\mathbf{X}\beta)\text{,}\]

where \(\mathbf{\Omega}\) is the covariance matrix of the observations. The problem admits a direct solution,

\[\beta = (\mathbf{X}^T \mathbf{\Omega}^{-1} \mathbf{X})^{-1} \mathbf{X}^T \mathbf{\Omega}^{-1} \mathbf{y}\text{,}\]

which can also be used to obtain an error estimate of \(\beta\), assuming \(\mathbf{\Omega}\) is correctly specified.
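As a concrete illustration, a minimal NumPy sketch of this closed-form solution is given below. The helper name `gls_solve` is hypothetical, not part of any package API, and a production implementation would typically factor \(\mathbf{\Omega}\) rather than invert it explicitly.

```python
import numpy as np

def gls_solve(X, y, omega):
    """Closed-form generalized least squares (hypothetical helper).

    Returns beta = (X^T Omega^{-1} X)^{-1} X^T Omega^{-1} y together with
    the parameter covariance (X^T Omega^{-1} X)^{-1}, which is a valid
    error estimate of beta when omega is correctly specified.
    """
    W = np.linalg.inv(omega)           # inverse observation covariance
    XtW = X.T @ W
    beta_cov = np.linalg.inv(XtW @ X)  # (X^T Omega^{-1} X)^{-1}
    beta = beta_cov @ (XtW @ y)
    return beta, beta_cov
```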

The regression is performed as an iterative procedure (Cochrane and Orcutt, 1949) with \(\mathbf{\Omega}\) set to the identity matrix for the first iteration, which makes the first iteration equivalent to unweighted ordinary least squares. After the first iteration the lag-one autocorrelation coefficient of the residuals, \(\rho\), is calculated through

\[\rho = \frac{\sum_{i=2}^{n} (\epsilon_i - \overline{\epsilon})(\epsilon_{i-1} - \overline{\epsilon})}{\sum_{i=1}^n (\epsilon_i - \overline{\epsilon})^2}\text{,}\]

where \(\overline{\epsilon}\) is the mean value of the residuals. Typical values for the autocorrelation coefficient are \(\sim 0.2-0.3\). For the next iteration the covariance matrix is modified to take the autocorrelation into account (Prais and Winsten, 1954),

\[\begin{split}\mathbf{\Omega} = \begin{bmatrix}\frac{1}{1-\rho^2} & \frac{\rho}{1-\rho^2} & \frac{\rho^2}{1-\rho^2} & \cdots &\frac{\rho^{n-1}}{1-\rho^2} \\[8pt]\frac{\rho}{1-\rho^2} & \frac{1}{1-\rho^2} & \frac{\rho}{1-\rho^2} & \cdots & \frac{\rho^{n-2}}{1-\rho^2} \\[8pt]\frac{\rho^2}{1-\rho^2} & \frac{\rho}{1-\rho^2} & \frac{1}{1-\rho^2} & \cdots & \frac{\rho^{n-3}}{1-\rho^2} \\[8pt]\vdots & \vdots & \vdots & \ddots & \vdots \\[8pt]\frac{\rho^{n-1}}{1-\rho^2} & \frac{\rho^{n-2}}{1-\rho^2} & \frac{\rho^{n-3}}{1-\rho^2} & \cdots & \frac{1}{1-\rho^2}\end{bmatrix}\text{.}\end{split}\]
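In code, the autocorrelation estimate and the Prais-Winsten covariance reduce to a few lines, since each entry of the matrix is simply \(\Omega_{ij} = \rho^{|i-j|}/(1-\rho^2)\). The sketch below continues the example above, assumes evenly spaced, gap-free observations, and uses illustrative function names only.

```python
def lag1_autocorrelation(resid):
    """Lag-one autocorrelation of the residuals, as in the formula above."""
    e = resid - resid.mean()
    return np.sum(e[1:] * e[:-1]) / np.sum(e ** 2)

def prais_winsten_covariance(rho, n):
    """AR(1) covariance matrix with entries rho^|i-j| / (1 - rho^2)."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :]) / (1.0 - rho ** 2)
```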

The covariance matrix is further modified to account for measurement gaps (Savin and White, 1978). This procedure is repeated until the autocorrelation coefficient has converged to within a set tolerance. The final error estimate is calculated by scaling \(\mathbf{\Omega}\) to match the observed variance of the residuals.
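Putting the pieces together, the iteration can be sketched as follows using the hypothetical helpers defined above. The sketch omits the gap handling of Savin and White (1978) and uses a simple absolute tolerance on \(\rho\); the actual implementation may differ in these details.

```python
def iterative_gls(X, y, tol=1e-6, max_iter=50):
    """Cochrane-Orcutt style iteration (illustrative sketch)."""
    n = len(y)
    omega = np.eye(n)   # first iteration: ordinary least squares
    rho_prev = 0.0
    for _ in range(max_iter):
        beta, beta_cov = gls_solve(X, y, omega)
        resid = y - X @ beta
        rho = lag1_autocorrelation(resid)
        if abs(rho - rho_prev) < tol:   # rho has converged
            break
        omega = prais_winsten_covariance(rho, n)
        rho_prev = rho
    # scale the error estimate to match the observed residual variance
    sigma2 = resid @ np.linalg.solve(omega, resid) / (n - X.shape[1])
    return beta, beta_cov * sigma2
```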

Predictors/Configurations

There are three standard configurations: the piecewise linear trend (PWLT), the independent linear trend (ILT), and the EESC trend configuration. The regression procedure is the same for all configurations; the only difference is the set of predictors used in the model. Common to all of these baseline regression setups is a set of predictors intended to account for the natural variability of the measurements.

For convenience all of these predictors have been scaled to have mean 0 and standard deviation 1.
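That is, each predictor is z-scored before entering the regression; a minimal version of this scaling (with an assumed helper name) is:

```python
def standardize(x):
    """Scale a predictor to zero mean and unit standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()
```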

Default Predictors

[Figure 1: Standard predictors used in all regression schemes.]

References

Cochrane, D., & Orcutt, G. H. (1949). Application of least squares regression to relationships containing auto-correlated error terms. Journal of the American Statistical Association, 44(245), 32-61.

Prais, S. J., & Winsten, C. B. (1954). Trend estimators and serial correlation. Cowles Commission Discussion Paper No. 383, 1-26. Chicago.

Savin, N. E., & White, K. J. (1978). Testing for autocorrelation with missing observations. Econometrica, 46(1), 59.
