Monday, January 2, 2012

Random Regression in JAGS

Update: modified the overparameterized version to center the random effects on their own mean rather than using the overall intercept. That works even with multiple random effects (a sketch with two grouping factors follows the second model below). It still doesn't exactly match what Andrew Gelman suggests in MLHM, but I haven't had the chance to compare carefully.

Update: edited the code, following a comment pointing out an error (see the comments at the end of the post).

I think the BUGS language really needs a book on best practices. For example, the obvious approach to writing generic random-regression code in BUGS fails miserably. Before I show what I mean, let's make some data:

library(rjags)
library(ggplot2)

# Set up data.
input <- within(
    data = list(),
    expr = {
        beta_0 <- 5.222
        beta_x1 <- 2.93
        nGroups <- 10
        maxGroupSize <- 20
        beta_group_sd <- 1.3
        beta_group <- rnorm(
            n=nGroups,
            mean=0,
            sd=beta_group_sd
        )
        group_size <- rbinom(
            n=nGroups,
            size=maxGroupSize,
            prob=0.4
        )
        group <- unlist(
            sapply(
                X=1:nGroups,
                FUN=function(x,group_size) {
                    rep(x,times=group_size[x])
                },
                group_size=group_size,
                simplify=TRUE
            )
        )
        nRows <- sum(group_size)
        x1 <- runif(n=nRows, min = -2, max = 2)
        y <- vector(mode='numeric', length=nRows)
        error <- rnorm(n=nRows, mean = 0, sd = 0.4)
        for (r in 1:nRows) {
            y[r] <-
                beta_0 +
                beta_x1*x1[r] +
                beta_group[group[r]] +
                error[r]
        }
    }
)

input_clean <- input[c('nGroups','group','nRows','y','x1')]
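Since ggplot2 is already loaded, a quick look at the simulated data makes a useful sanity check before fitting anything; this is just a sketch (the data frame construction is only for plotting):

# Quick look at the simulated data: one fitted line per group,
# shifted up or down by that group's random effect.
ggplot(
    data = data.frame(
        y = input$y,
        x1 = input$x1,
        group = factor(input$group)
    ),
    mapping = aes(x = x1, y = y, colour = group)
) +
    geom_point() +
    geom_smooth(method = 'lm', se = FALSE)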


To me, in a situation where the model might change over time (say you do multiple rounds of model checking and development, in the approach suggested by Andrew Gelman, among others), the obvious way to do random regression is to write the equation for y[r] much as in the simulation code above, and then add further restrictions on groups of coefficients.

Here's the code for that model:

td <- tempdir()
model_file <- tempfile(
    pattern = 'random-regression',
    tmpdir = td,
    fileext = '.bugs'
)
cat('
model {
    # Linear model:
    for ( i in 1:nRows ) {
        mu[i] <- beta_0 + beta_x1 * x1[i] + beta_group[group[i]]
        y[i] ~ dnorm(mu[i],precision)
    }
    # Parameters:
    beta_0 ~ dnorm(0,0.1)
    beta_x1 ~ dnorm(0,0.1)
    for ( g in 1:nGroups ) {
        beta_group[g] ~ dnorm(0,beta_group_prec)
    }
    beta_group_prec <- 1/(beta_group_sd^2)
    beta_group_sd ~ dunif(0,5)

    # Error:
    precision <- 1/error_sd^2
    error_sd ~ dunif(0,5)

}
', file = model_file
)


If you take this approach, you end up with high autocorrelation in the posterior samples. It's (sometimes) OK with just a single random effect and few groups, and in many small datasets it can be brute-forced by thinning. As I recall, the specific issue is that all the random effects end up negatively correlated with the overall mean. If you figure that out, and perhaps run into the section on overparameterized models in MLHM (Gelman) or this set of lecture notes (Jackman), you might come up with an overparameterized solution:

td <- tempdir()
model_file <- tempfile(
    pattern = 'random-regression',
    tmpdir = td,
    fileext = '.bugs'
)
cat('
model {
    # Linear model:
    for ( i in 1:nRows ) {
        mu[i] <- beta_0 + beta_x1 * x1[i] + beta_group[group[i]]
        y[i] ~ dnorm(mu[i],precision)
    }
    # Parameters:
    beta_0 ~ dnorm(0,0.1)
    beta_x1 ~ dnorm(0,0.1)
    for ( g in 1:nGroups ) {
        beta_group_unhooked[g] ~ dnorm(0,beta_group_prec)
        beta_group[g] <- beta_group_unhooked[g] - mean(beta_group_unhooked[])
    }
    beta_group_prec <- 1/(beta_group_sd^2)
    beta_group_sd ~ dunif(0,5)

    # Error:
    precision <- 1/error_sd^2
    error_sd ~ dunif(0,5)

}
', file = model_file
)
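
As the update at the top notes, centering each set of random effects on its own mean also works with multiple random effects. Here's a hypothetical sketch of what such a model file could look like; group2 and nGroups2 stand in for a second grouping factor that is not part of the simulated data above:

model_file_2 <- tempfile(
    pattern = 'random-regression-two-factors',
    tmpdir = td,
    fileext = '.bugs'
)
cat('
model {
    # Linear model with two grouping factors; group2 and nGroups2 are
    # hypothetical and would need to be supplied in the data:
    for ( i in 1:nRows ) {
        mu[i] <- beta_0 + beta_x1 * x1[i] +
                 beta_group[group[i]] + beta_group2[group2[i]]
        y[i] ~ dnorm(mu[i],precision)
    }
    # Parameters:
    beta_0 ~ dnorm(0,0.1)
    beta_x1 ~ dnorm(0,0.1)
    # Each set of random effects is centered on its own mean:
    for ( g in 1:nGroups ) {
        beta_group_unhooked[g] ~ dnorm(0,beta_group_prec)
        beta_group[g] <- beta_group_unhooked[g] - mean(beta_group_unhooked[])
    }
    beta_group_prec <- 1/(beta_group_sd^2)
    beta_group_sd ~ dunif(0,5)
    for ( g in 1:nGroups2 ) {
        beta_group2_unhooked[g] ~ dnorm(0,beta_group2_prec)
        beta_group2[g] <- beta_group2_unhooked[g] - mean(beta_group2_unhooked[])
    }
    beta_group2_prec <- 1/(beta_group2_sd^2)
    beta_group2_sd ~ dunif(0,5)

    # Error:
    precision <- 1/error_sd^2
    error_sd ~ dunif(0,5)
}
', file = model_file_2
)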



Here's the code to run either model; if you plot some traces you can see that the second version behaves much better.

# Run model.
model <- jags.model(
    file = model_file,
    data = input_clean,
    n.chains = 5,
    n.adapt = 2000
)

samp <- jags.samples(
    model = model,
    variable.names = c(
        'beta_0',
        'beta_x1',
        'beta_group',
        'beta_group_sd',
        'error_sd'
    ),
    n.iter = 1000,
    thin = 1
)

# Convergence diagnostics: jags.samples returns mcarray objects, so
# convert each one to an mcmc.list before calling coda's gelman.diag.
for (nom in names(samp)) {
    print(nom)
    print(gelman.diag(as.mcmc.list(samp[[nom]])))
}
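The traces themselves are easiest to look at by pulling a coda-friendly object out of the model; this is a minimal sketch, assuming the model has already been compiled and adapted as above:

# Draw additional samples as an mcmc.list, which is convenient for
# trace plots and cross-correlations.
samp_coda <- coda.samples(
    model = model,
    variable.names = c('beta_0', 'beta_x1', 'beta_group'),
    n.iter = 1000
)

# Trace plots: with the first parameterization the chains for beta_0 and
# the group effects wander slowly; with the centered version they mix well.
plot(samp_coda, density = FALSE)

# The problem shows up directly in the cross-correlation matrix:
# under the first model, beta_0 and the group effects are strongly
# negatively correlated.
crosscorr(samp_coda)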


Martyn Plummer, Andrew Gelman, and others have commented in various places that it would be nice for JAGS/OpenBUGS to recognize this sort of thing and handle it automatically, but for now there are many situations where you can only get good results from BUGS code by playing to the strengths of the system. It's probably not worth writing a whole book on best practices, but it is certainly worth a few blog posts. I also wonder whether JAGS couldn't do much better in situations like this, with minimal additional code, if it allowed the user to give the compiler hints about which nodes are correlated.



2 comments:

  1. Shouldn't the line

    beta_group[g] <- beta_group_unhooked[g] - mean(beta_group_unhooked[g])

    be replaced with

    beta_group[g] <- beta_group_unhooked[g] - mean(beta_group_unhooked[1:nGroups])

    ?

    Replies
    1. Yes, thanks. That's what I get for not re-checking the code after the edit. With subsequent JAGS versions I've gotten ambiguous comparisons between the two methods in more complex models, so I'm not sure how relevant this is anymore.
