Hello,
My research team is running some multi-level models to examine the associations between measures of child care center quality and child outcomes.
The generic form of the model is shown below. Spring_score is the child's assessment score in the spring, e.g., on the Woodcock-Johnson Tests of Achievement. We control for the fall score, child covariates, and site-level covariates, and treat the measure of site quality as our variable of interest.
xtset siteid
xtreg spring_score site_quality fall_score student_covariates site_covariates, re vce(robust)
Is there any way that I can generate standardized regression coefficients from the model? That would be our preference, but we haven't been able to figure out how to do that. If generating standardized coefficients is not an option, what other choices do we have for examining results for multiple outcomes that have different scales?
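One idea we considered, though we are not sure it is the right approach, was to z-score the outcome and the quality measure by hand before refitting (variable names follow the model above):

```stata
* Rough sketch only -- standardize outcome and predictor by hand
egen z_spring = std(spring_score)
egen z_quality = std(site_quality)
xtset siteid
xtreg z_spring z_quality fall_score student_covariates site_covariates, re vce(robust)
```

We were unsure whether standardizing a dummy variable this way is meaningful, which is part of why we are asking.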
For now we are generating predicted values for the spring test score and calculating the means for groups defined by the values on the site quality variable (which is a dummy). Our code for that is as follows.
predict yhat_spring_score
putexcel A3=("Adjusted Means") B3=("N") C3=("Mean") D3=("SD") using "directory and file name", sheet(sheet name, replace) modify
sum yhat_spring_score if site_quality==1
putexcel A4=("site_quality = 1") B4=(r(N)) C4=(r(mean)) D4=(r(sd)) using "directory and file name", sheet(sheet name, replace) modify
sum yhat_spring_score if site_quality==0
putexcel A5=("site_quality = 0") B5=(r(N)) C5=(r(mean)) D5=(r(sd)) using "directory and file name", sheet(sheet name, replace) modify
I understand that the margins command might be an alternative way of getting these regression-adjusted means? Is that correct? If so, could somebody write a snippet of sample code and help me understand how the results from margins would differ from our current approach? In the past, I have only calculated marginal effects for logit models, so I'm having trouble understanding what this would mean in the context of a continuous outcome. I suspect the problem might be a semantics issue between disciplines.
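Something like the snippet below is what I imagined, using factor-variable notation so that margins recognizes the dummy, but I am not confident it is correct:

```stata
* Rough guess -- refit with factor-variable notation for the dummy
xtreg spring_score i.site_quality fall_score student_covariates site_covariates, re vce(robust)
* Adjusted predicted means of spring_score at each level of site_quality
margins site_quality
```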
Another option recommended to us was to calculate effect sizes by hand using the coefficient for the variable of interest and the standard deviation of the "control group", i.e., children in the low-quality group. This approach was recommended by an experimental researcher, but our data are from a correlational study.
Thank you,
Aleksandra