  • #16
    Originally posted by Vincent Li View Post
    . . .my questions first. The results of -testparm- do not convey the same information as those from the random-effects model. The coefficients in the random-effects model indicate the average score changes from pre to po1/po2 under each condition (e.g., 1.condition#0.period, 1.condition#1.period). But the -testparm- outcomes compare po1's change with po2's change under each condition, or attribute the pre-to-po1/po2 score changes to the different conditions. Did I interpret the outcomes correctly?
    I can't quite follow you fully except to say that (1) the two testparm commands that I showed above in #12 address your two primary research questions and (2) the testparm command that you showed in #13 does not address either of your two primary research questions and its result is therefore irrelevant. What do the two examples of the testparm commands that I gave above in #12 show?

    Furthermore, is the test comparable to what repeated ANOVA does? . . . I was asked why I didn't perform a repeated ANOVA first to examine whether the condition#period interaction term shows differences.
    The answer to whoever asked you is (1) it isn't necessary or even desirable to fit a repeated-measures ANOVA either before or after fitting the random effects linear regression model. It doesn't add anything to the analysis and it doesn't help address your two primary research questions over and above the code that I gave above in #12. And (2) it isn't necessary or even desirable to first test whether the overall condition × period interaction term is "statistically significant" in order to assess your two primary research questions, which I assume were specified a priori.

    Originally posted by Vincent Li View Post
    To figure out the differences between the random-effects model and repeated ANOVA, I ran the following commands and got the corresponding outcomes: . . . According to the repeated ANOVA, there are no significant differences in SRS scores between condition*period groups. However, this is not consistent with the random-effects outcomes. How should I interpret their inconsistency? . . . Is it more comparable to what the random-effects model did?
    Neither of your syntax examples for repeated-measures ANOVA is correct. Regardless, repeated-measures ANOVA is not what you want in order to assess your two primary research questions. The code that I showed for xtreg (you can add the time-invariant covariates), followed by the two postestimation testparm examples, will get you most directly to the assessments that you seek.

    The next step would be to forgo all of the distraction over repeated-measures ANOVA and plot the marginal effects using marginsplot, which I believe will be invaluable to interpretation of the results of your study.
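    For concreteness, a minimal sketch of that plotting step (assuming the xtreg model with i.condition##i.period from #12 is the active estimation result) would be:

```stata
* Predicted mean SRS in each condition-by-period cell
margins condition#period

* One profile line per condition, with period on the horizontal axis
marginsplot, xdimension(period)
```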



    • #17
      Originally posted by Joseph Coveney View Post

      I apologize for the confusion, Joseph. Let me sort it out.

      First, I explore how the SRS score changes from T0 to T1 and T2 under each condition.
      The code is:
      Code:
      xtreg srs i.condition##i.period, i(indi_num) re vce(robust)
      The outcomes are:
      Code:
      Random-effects GLS regression                   Number of obs     =        366
      Group variable: indi_num                        Number of groups  =        122
      
      R-squared:                                      Obs per group:
           Within  = 0.1207                                         min =          3
           Between = 0.0129                                         avg =        3.0
           Overall = 0.0273                                         max =          3
      
                                                      Wald chi2(8)      =      22.56
      corr(u_i, X) = 0 (assumed)                      Prob > chi2       =     0.0040
      
                                       (Std. err. adjusted for 122 clusters in indi_num)
      ----------------------------------------------------------------------------------
                       |               Robust
                   srs | Coefficient  std. err.      z    P>|z|     [95% conf. interval]
      -----------------+----------------------------------------------------------------
             condition |
                    1  |  -2.707317     4.5169    -0.60   0.549    -11.56028    6.145645
                    2  |   .1810976   4.571382     0.04   0.968    -8.778647    9.140842
                       |
                period |
                    1  |   -8.97561   2.776295    -3.23   0.001    -14.41705   -3.534171
                    2  |  -8.268293   2.256088    -3.66   0.000    -12.69014   -3.846442
                       |
      condition#period |
                  1 1  |   6.609756   3.346904     1.97   0.048     .0499453    13.16957
                  1 2  |   3.439024   3.061285     1.12   0.261    -2.560983    9.439032
                  2 1  |    7.92561   3.131587     2.53   0.011     1.787812    14.06341
                  2 2  |   6.593293   2.895518     2.28   0.023     .9181814     12.2684
                       |
                 _cons |    94.2439   3.354506    28.09   0.000     87.66919    100.8186
      -----------------+----------------------------------------------------------------
               sigma_u |  18.959945
               sigma_e |  8.8829199
                   rho |  .82000724   (fraction of variance due to u_i)
      ----------------------------------------------------------------------------------
      The results show decreases in scores from pre under Condition 0 (-8.98**, -8.27***); the positive condition#period coefficients for Condition 2 (7.93*, 6.59*) indicate that the changes there differ from Condition 0's changes.
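      Since the condition#period coefficients are differences relative to Condition 0's change, the total pre-to-po1 change under, say, Condition 2 can be recovered with -lincom- (a sketch against the model above):

```stata
* Total T0-to-T1 change in Condition 2:
* period main effect plus the condition 2 # period 1 interaction
lincom 1.period + 2.condition#1.period
```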


      Second, I want to compare the differences among conditions at the same timepoint. This follows your suggestions (taking period==1 as an example):
      Code:
      testparm i.condition##i.period
      ( 1) 1.condition = 0
      ( 2) 2.condition = 0
      ( 3) 1.period = 0
      ( 4) 2.period = 0
      ( 5) 1.condition#1.period = 0
      ( 6) 1.condition#2.period = 0
      ( 7) 2.condition#1.period = 0
      ( 8) 2.condition#2.period = 0

      chi2( 8) = 22.56
      Prob > chi2 = 0.0040

      Code:
      testparm 1.period 1.condition#1.period 2.condition#1.period
      ( 1) 1.period = 0
      ( 2) 1.condition#1.period = 0
      ( 3) 2.condition#1.period = 0

      chi2( 3) = 12.58
      Prob > chi2 = 0.0056

      Code:
      testparm 1.condition#1.period 1.period 
      testparm 2.condition#1.period 1.period 
      testparm 1.condition#1.period 2.condition#1.period

      /*
      testparm 1.condition#1.period 1.period

      ( 1) 1.period = 0
      ( 2) 1.condition#1.period = 0

      F( 2, 121) = 6.06
      Prob > F = 0.0031

      .

      . testparm 2.condition#1.period 1.period

      ( 1) 1.period = 0
      ( 2) 2.condition#1.period = 0

      F( 2, 121) = 5.52
      Prob > F = 0.0051

      . testparm 1.condition#1.period 2.condition#1.period

      ( 1) 1.condition#1.period = 0
      ( 2) 2.condition#1.period = 0

      F( 2, 121) = 3.23
      Prob > F = 0.0428

      */

      Then I tried -test- instead:
      Code:
      test 1.condition#1.period=1.period
      test 1.period=2.condition#1.period
      test 1.condition#1.period=2.condition#1.period

      . test 1.condition#1.period=1.period

      ( 1) - 1.period + 1.condition#1.period = 0

      chi2( 1) = 7.08
      Prob > chi2 = 0.0078

      . test 1.period=2.condition#1.period

      ( 1) 1.period - 2.condition#1.period = 0

      chi2( 1) = 8.67
      Prob > chi2 = 0.0032

      . test 1.condition#1.period=2.condition#1.period

      ( 1) 1.condition#1.period - 2.condition#1.period = 0

      chi2( 1) = 0.31
      Prob > chi2 = 0.5779
      The outcomes from -testparm- indicate that the interaction coefficients for Condition 1/Condition 2 (versus Condition 0) are not all equal to 0 simultaneously.
      The -test- results show that the score change in Condition 0 differs significantly from the changes in Condition 1/Condition 2.
      However, the two commands give different results for 1.condition#1.period and 2.condition#1.period. How should I interpret that discrepancy?
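      If I understand the commands correctly, they test different null hypotheses, which may explain the discrepancy: -testparm- with two coefficients is a joint 2-df test that both are zero, while -test- with an equals sign is a 1-df test that the two coefficients equal each other:

```stata
* Joint test: both interaction terms are zero (2 df)
testparm 1.condition#1.period 2.condition#1.period

* Equality test: the two interaction terms equal each other (1 df)
test 1.condition#1.period = 2.condition#1.period
```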

      Third, regarding repeated ANOVA: I'm curious whether its outcomes are consistent with the random-effects model, but my commands seem to be incorrect and the outcomes look odd. I'll explore them next week.
      Code:
      anova srs i.condition##i.period indi_num, repeated (period) bse(indi_num)

                               Number of obs =      366    R-squared     = 0.8833
                               Root MSE      =  8.88292    Adj R-squared = 0.8210

                        Source | Partial SS         df         MS        F    Prob>F
              -----------------+----------------------------------------------------
                         Model |  142108.42        127    1118.9639    14.18   0.0000
                               |
                     condition |  7472.8889          2    3736.4444    47.35   0.0000
                        period |  1705.2682          2    852.63408    10.81   0.0000
              condition#period |  852.84952          4    213.21238     2.70   0.0313
                      indi_num |  137724.03        119    1157.3448    14.67   0.0000
                               |
                      Residual |  18779.691        238    78.906267
              -----------------+----------------------------------------------------
                         Total |  160888.11        365    440.78935


      Between-subjects error term: indi_num
                           Levels: 122        (119 df)
           Lowest b.s.e. variable: indi_num

      Repeated variable: period
                           Huynh-Feldt epsilon        = 0.9672
                           Greenhouse-Geisser epsilon = 0.9367
                           Box's conservative epsilon = 0.5000

                                          ------------ Prob > F ------------
                        Source |     df      F    Regular    H-F     G-G     Box
              -----------------+----------------------------------------------------
                        period |      2  10.81    0.0000   0.0000  0.0001  0.0013
              condition#period |      4   2.70    0.0313   0.0330  0.0346  0.0712
                      Residual |    238
              ----------------------------------------------------------------------
      Code:
      pwcompare i.condition##i.period, pveffects mcompare(bonferroni)
      Pairwise comparisons of marginal linear predictions

      Margins: asbalanced

      -------------------------------
                       |    Number of
                       |  comparisons
      -----------------+-------------
             condition |            3
                period |            3
      condition#period |           36
      -------------------------------

      ---------------------------------------------------------
                       |                          Bonferroni
                       |   Contrast   Std. err.      t    P>|t|
      -----------------+---------------------------------------
             condition |
                1 vs 0 |          .  (not estimable)
                2 vs 0 |          .  (not estimable)
                2 vs 1 |          .  (not estimable)
                       |
                period |
                1 vs 0 |  -4.130488   1.137418    -3.63   0.001
                2 vs 0 |  -4.924187   1.137418    -4.33   0.000
                2 vs 1 |  -.7936992   1.137418    -0.70   1.000
                       |
      condition#period |
        (0 1) vs (0 0) |   -8.97561   1.961909    -4.57   0.000
        (0 2) vs (0 0) |  -8.268293   1.961909    -4.21   0.001
        (1 0) vs (0 0) |          .  (not estimable)
        (1 1) vs (0 0) |          .  (not estimable)
        (1 2) vs (0 0) |          .  (not estimable)
        (2 0) vs (0 0) |          .  (not estimable)
        (2 1) vs (0 0) |          .  (not estimable)
        (2 2) vs (0 0) |          .  (not estimable)
        (0 2) vs (0 1) |   .7073171   1.961909     0.36   1.000
        (1 0) vs (0 1) |          .  (not estimable)
        (1 1) vs (0 1) |          .  (not estimable)
        (1 2) vs (0 1) |          .  (not estimable)
        (2 0) vs (0 1) |          .  (not estimable)
        (2 1) vs (0 1) |          .  (not estimable)
        (2 2) vs (0 1) |          .  (not estimable)
        (1 0) vs (0 2) |          .  (not estimable)
        (1 1) vs (0 2) |          .  (not estimable)
        (1 2) vs (0 2) |          .  (not estimable)
        (2 0) vs (0 2) |          .  (not estimable)
        (2 1) vs (0 2) |          .  (not estimable)
        (2 2) vs (0 2) |          .  (not estimable)
        (1 1) vs (1 0) |  -2.365854   1.961909    -1.21   1.000
        (1 2) vs (1 0) |  -4.829268   1.961909    -2.46   0.524
        (2 0) vs (1 0) |          .  (not estimable)
        (2 1) vs (1 0) |          .  (not estimable)
        (2 2) vs (1 0) |          .  (not estimable)
        (1 2) vs (1 1) |  -2.463415   1.961909    -1.26   1.000
        (2 0) vs (1 1) |          .  (not estimable)
        (2 1) vs (1 1) |          .  (not estimable)
        (2 2) vs (1 1) |          .  (not estimable)
        (2 0) vs (1 2) |          .  (not estimable)
        (2 1) vs (1 2) |          .  (not estimable)
        (2 2) vs (1 2) |          .  (not estimable)
        (2 1) vs (2 0) |      -1.05   1.986281    -0.53   1.000
        (2 2) vs (2 0) |     -1.675   1.986281    -0.84   1.000
        (2 2) vs (2 1) |      -.625   1.986281    -0.31   1.000
      ---------------------------------------------------------

      In the post hoc comparisons, only this part aligns with the results of the random-effects model:
      condition#period |
      (0 1) vs (0 0) | -8.97561 1.961909 -4.57 0.000
      (0 2) vs (0 0) | -8.268293 1.961909 -4.21 0.001




