
I set up a multilevel model in R with the lme4 package to test different effects on social participation in primary school classes. I assume that the effects of academic performance (sl.c) and emotional-social competence (es.c) vary between classes, which is why I specified a model with random slopes for both. Now I want to test those random slopes. This is my model (note that other variables, such as gender, migration background, and the teacher's attitude, are also included):

library(lme4)

mod3 <- lmer(
  social_t1 ~ 1 + sex + mig + sl.c + es.c + attitude + (1 + sl.c + es.c | class),
  data = kommschreib_ak,
  na.action = na.exclude
)
summary(mod3)

The problem is that summary() only outputs the variances of the random effects, not the p-values that I want.

I have already found out how to test the two slopes individually (via an anova() test), but not together in one model. So I want a significance test that gives me a p-value for both academic performance and emotional-social competence in the model that contains both random slopes.

Is there any chance I can do it in R?

My colleague has done this in Mplus so far. Since we assume that the random effects influence each other, we get different results when I test the effects individually.


1 Answer


To test the random variation in sl.c and es.c jointly, I think you want to fit a model that drops both of these terms (retaining only the random intercept) and then use anova() to compare it to the full model. This is most easily done with update():

mod3B <- update(mod3,
  . ~ . - (1 + sl.c + es.c|class) + (1|class)
) 
anova(mod3, mod3B)
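
A side note: when anova() is given two lmer fits, it refits both with maximum likelihood (ML) before comparing them. Because mod3 and mod3B have identical fixed effects, their REML likelihoods are also directly comparable, so you can suppress the refit if you prefer:

## same comparison on the REML fits; valid here because the fixed
## effects of mod3 and mod3B are identical
anova(mod3, mod3B, refit = FALSE)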

Note that the likelihood ratio test (which is what anova() does) is technically not applicable [it gives strongly conservative results] when testing hypotheses corresponding to parameters on the boundary of the feasible set (i.e., the variances can be zero, but they can't be negative). The RLRsim package is designed for these kinds of tests.
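
If you want a p-value that takes the boundary problem into account without RLRsim, a parametric bootstrap of the likelihood ratio statistic is a common alternative. Here is a minimal sketch, assuming no complications from missing values in the response (simulate() and refit() are standard lme4 functions; nsim is kept small for illustration):

set.seed(101)
nsim <- 500                        # illustrative; use more in practice
lrt_obs <- 2 * (c(logLik(mod3)) - c(logLik(mod3B)))
lrt_sim <- replicate(nsim, {
  y_null <- simulate(mod3B)[[1]]   # simulate a response under the null
  fit0 <- refit(mod3B, y_null)     # refit both models to the simulated data
  fit1 <- refit(mod3, y_null)
  2 * (c(logLik(fit1)) - c(logLik(fit0)))
})
mean(lrt_sim >= lrt_obs)           # bootstrap p-value for the joint test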

If you want to test the significance of individual random effects (not jointly), then the recipe is similar: drop them one at a time and run anova(), e.g.

## test null hypothesis Var(es.c) == 0
mod3C <- update(mod3,
  . ~ . - (1 + sl.c + es.c|class) + (1 + sl.c|class)
) 
anova(mod3, mod3C)
## test null hypothesis Var(sl.c) == 0
mod3D <- update(mod3,
  . ~ . - (1 + sl.c + es.c|class) + (1 + es.c|class)
) 
anova(mod3, mod3D)

This has the same caveats as before (i.e., the likelihood ratio test is not really appropriate here): see also the relevant section of the GLMM FAQ.
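
For completeness, here is roughly how RLRsim would be used for one of these single-slope tests. exactRLRT() needs three fits when the model contains several variance components: m, an auxiliary model containing only the random effect under test; mA, the full model; and m0, the null model. Treat this as a sketch only, since exactRLRT() assumes the tested random effect is independent of the remaining ones, which the correlated slopes in mod3 strictly speaking violate:

library(RLRsim)
## auxiliary model containing only the random effect under test
m_es <- lmer(
  social_t1 ~ 1 + sex + mig + sl.c + es.c + attitude + (0 + es.c | class),
  data = kommschreib_ak,
  na.action = na.exclude
)
## simulation-based restricted LRT of the null hypothesis Var(es.c) == 0
exactRLRT(m = m_es, mA = mod3, m0 = mod3C)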

You can do this more directly with lmerTest::ranova().
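
For example (a sketch; ranova() drops each random-effect term in turn and reports a likelihood ratio test per term, with the same boundary caveat as above):

library(lmerTest)
## refit with lmerTest::lmer so its methods are available (assumption:
## mod3 was originally fitted with lme4::lmer)
mod3_lt <- lmerTest::lmer(
  social_t1 ~ 1 + sex + mig + sl.c + es.c + attitude + (1 + sl.c + es.c | class),
  data = kommschreib_ak,
  na.action = na.exclude
)
ranova(mod3_lt)  # one row per random-effect term that can be dropped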


1 Comment

That is not really what I was looking for. When I fit my model, R gives a p-value for each fixed effect, and now I am looking for a similar way to test each random effect at the same time (so that I know, for example, that emotional-social competence is significant as a random effect while academic performance in the same model is not).
