Is it possible to obtain means and standard deviations using -margins- after a -mixed- model that includes only a random intercept to accommodate repeated observations within subjects over time? Assume no random slope terms here, just a random intercept to account for the within-subjects design.
For example:
y = outcome (continuous/normal, collected at 2 or more time periods)
group = indicator variable for group (example: 2 group design)
time = indicator variable for time (example: 3 times)
ID = variable containing subject IDs
mixed y i.group##i.time || ID:, reml
Assuming that the group#time interaction is significant, I would typically follow this with:
-margins group#time
to get a table of estimated marginal means, delta-method SEs, and 95% CIs, and I typically graph/publish the data as means with 95% CIs.
(I might also do some pairwise contrasts, but my question is not about that...)
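For context, a minimal sketch of that graphing step (same variable names as above; -marginsplot- simply re-uses the last -margins- results):

mixed y i.group##i.time || ID:, reml
margins group#time
marginsplot    // plots the estimated marginal means with 95% CIs across group and time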
I received an email from a researcher who is attempting to extract means and SDs from one of my publications for a meta-analysis, and I'm not up to speed on current thinking about converting delta-method SEs from a mixed model like this (random intercept only) to SDs. My initial instinct is just to tally the n per time point per group (which can differ if there are missing data over time) and use the usual SD = SE*sqrt(n), but I wanted to verify with colleagues here whether that approach is appropriate.
(1) Is SD = SE*sqrt(n) appropriate in this context?
(2) Is there a way to automate this using -margins- or some other command? (A rough sketch of what I have in mind is below.)
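The sketch, using the same variable names as above; the cell column name in the last comment and n_cell are placeholders, not something -margins- produces for you:

mixed y i.group##i.time || ID:, reml
margins group#time

* delta-method SEs sit in the "se" row of r(table);
* the column names identify each group#time cell
matrix T = r(table)
matrix list T

* per-cell n on the estimation sample (can differ if data are missing over time)
tabulate group time if e(sample)

* then, for any one cell, SD = SE*sqrt(n), e.g.:
* display el(T, rownumb(T, "se"), colnumb(T, "1.group#3.time")) * sqrt(n_cell)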
Much obliged for your assistance.