Hi all, I am running a multilevel model, specified as follows:
Below is the model output:
Then I run predict re*, reffects and get the following output:
My question is: what are re1 and re2, respectively? I assume one of them is the random effect for the intercept and the other is for the slope, right? If I then want to calculate predicted probabilities using the syntax below, which random effect(s) should I take into account? In the code below I only include re1, but I was wondering whether I should also plug in re2 somewhere (I sketch what I mean after the loop at the end).
Many thanks!!
Code:
melogit gedtimehi c.grade##c.grade ib4.RACE ib4.RACE#c.grade || kindergarten:, || studentnum2:, cov(un)
Code:
Fitting fixed-effects model:

Iteration 0:   log likelihood = -6232.0048
Iteration 1:   log likelihood =  -6221.352
Iteration 2:   log likelihood = -6221.3449
Iteration 3:   log likelihood = -6221.3449

Refining starting values:

Grid node 0:   log likelihood = -5295.7391

Fitting full model:

Iteration 0:   log likelihood = -5295.7391  (not concave)
Iteration 1:   log likelihood = -5274.2167
Iteration 2:   log likelihood = -4906.4134
Iteration 3:   log likelihood =  -4848.274
Iteration 4:   log likelihood = -4825.8941
Iteration 5:   log likelihood = -4824.2776
Iteration 6:   log likelihood = -4823.8966
Iteration 7:   log likelihood =  -4823.824
Iteration 8:   log likelihood = -4823.8111
Iteration 9:   log likelihood = -4823.8083
Iteration 10:  log likelihood = -4823.8077
Iteration 11:  log likelihood = -4823.8077

Mixed-effects logistic regression               Number of obs     =      11088

-----------------------------------------------------------
                |   No. of       Observations per Group
 Group Variable |   Groups    Minimum    Average    Maximum
----------------+------------------------------------------
   kindergarten |       41          2      270.4        588
    studentnum2 |     4261          1        2.6         10
-----------------------------------------------------------

Integration method: mvaghermite                 Integration points =         7

                                                Wald chi2(8)      =     147.24
Log likelihood = -4823.8077                     Prob > chi2       =     0.0000
------------------------------------------------------------------------------------------
               gedtimehi |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------------------+----------------------------------------------------------------
                gradelvl |   .2788558   .0577974     4.82   0.000      .165575    .3921366
                         |
   c.gradelvl#c.gradelvl |  -.0458036   .0062849    -7.29   0.000    -.0581218   -.0334854
                         |
                    RACE |
        African American |   -.943746   .2228096    -4.24   0.000    -1.380445   -.5070473
               Am Indian |  -.4564835   .2429516    -1.88   0.060    -.9326599    .0196928
                   Asian |   .0272462   .2178825     0.13   0.900    -.3997956     .454288
                         |
         RACE#c.gradelvl |
        African American |  -.0585262   .0475355    -1.23   0.218     -.151694    .0346415
               Am Indian |   .1767307    .058828     3.00   0.003       .06143    .2920313
                   Asian |  -.1326549   .0477209    -2.78   0.005    -.2261861   -.0391237
                         |
                   _cons |    3.26059   .2499694    13.04   0.000     2.770659    3.750522
-------------------------+----------------------------------------------------------------
kindergarten             |
               var(_cons)|   1.161754   .2758725                      .7294345    1.850301
-------------------------+----------------------------------------------------------------
kindergarten>studentnum2 |
               var(_cons)|   13.27539   1.272639                      11.00138    16.01944
------------------------------------------------------------------------------------------
LR test vs. logistic regression: chi2(2) = 2795.07        Prob > chi2 = 0.0000

Note: LR test is conservative and provided only for reference.
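For reference, my current reading of this output is that there are two random intercepts (one per level) and no random slope, i.e. something along the lines of

\[ \operatorname{logit}\Pr(\text{gedtimehi}_{ijk}=1) = \mathbf{x}_{ijk}'\boldsymbol{\beta} + u_k + v_{jk}, \qquad u_k \sim N(0,\sigma^2_u), \quad v_{jk} \sim N(0,\sigma^2_v) \]

where u_k would be the kindergarten-level intercept (var(_cons) = 1.16 above) and v_jk the student-within-kindergarten intercept (var(_cons) = 13.28). That is exactly the reading I would like to confirm.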
Code:
. predict re*, reffects
(calculating posterior means of random effects)
(using 7 quadrature points)
(154663 missing values generated)

. describe re1 re2

              storage   display    value
variable name   type    format     label      variable label
----------------------------------------------------------------------------------------------------------
re1             float   %9.0g                 empirical Bayes' means for _cons[kindergarten]
re2             float   %9.0g                 empirical Bayes' means for _cons[kindergarten>studentnum2]
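To make concrete what I am unsure about, here is a minimal sketch of the two alternatives I have in mind (xbfix, eta1, eta2, p1 and p2 are just names made up for this illustration):

Code:
. predict xbfix, xb                      // fixed portion of the linear predictor
. gen double eta1 = xbfix + re1          // kindergarten intercept only (as in my loop below)
. gen double eta2 = xbfix + re1 + re2    // both kindergarten and student-level intercepts?
. gen double p1 = invlogit(eta1)
. gen double p2 = invlogit(eta2)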
Code:
matrix m = J(10, 5, 0)
forvalues j = 0(1)9 {
    matrix m[`=`j'+1', 1] = `j'
    forval i = 1/4 {
        tempvar rxb`i' p`i'
        qui g `rxb`i'' = _b[`i'.RACE]*1 + _b[grade]*`j' + _b[c.grade#c.grade]*`j'*`j' + _b[_cons] + re1
        qui g `p`i'' = exp(`rxb`i'')/(1 + exp(`rxb`i''))
        qui sum `p`i'' if e(sample)
        matrix m[`=`j'+1', `=`i'+1'] = r(mean)
    }
}
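And if re2 does belong in the linear predictor as well, I assume the gen line inside the loop would simply change to something like this (again only a sketch; I have also added the RACE-by-grade interaction coefficient, which the version above leaves out):

Code:
qui g `rxb`i'' = _b[`i'.RACE]*1 + _b[grade]*`j' + _b[c.grade#c.grade]*`j'*`j' ///
    + _b[`i'.RACE#c.grade]*`j' + _b[_cons] + re1 + re2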