
  • pvalue of random effects in mixed command

    Hi, I am using the mixed command to analyze my data. I noticed that the output for the random effects shows only confidence intervals. Is there any way to get p-values for the random effects? I'd really appreciate any help!

  • #2
    r(table) is your friend; see the pvalue row. (Some values are missing in the made-up example below, but they will likely be genuine in your data.)

    Code:
    . sysuse auto, clear
    (1978 automobile data)
    
    . mixed price weight || rep78:
    
    Performing EM optimization ...
    
    Performing gradient-based optimization: 
    Iteration 0:  Log likelihood = -635.61745  
    Iteration 1:  Log likelihood = -635.50194  
    Iteration 2:  Log likelihood = -635.48706  
    Iteration 3:  Log likelihood = -635.48701  
    
    Computing standard errors ...
    
    Mixed-effects ML regression                          Number of obs    =     69
    Group variable: rep78                                Number of groups =      5
                                                         Obs per group:
                                                                      min =      2
                                                                      avg =   13.8
                                                                      max =     30
                                                         Wald chi2(1)     =  29.59
    Log likelihood = -635.48701                          Prob > chi2      = 0.0000
    
    ------------------------------------------------------------------------------
           price | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
    -------------+----------------------------------------------------------------
          weight |    2.01242   .3699558     5.44   0.000      1.28732     2.73752
           _cons |    44.3284   1158.895     0.04   0.969    -2227.065    2315.722
    ------------------------------------------------------------------------------
    
    ------------------------------------------------------------------------------
      Random-effects parameters  |   Estimate   Std. err.     [95% conf. interval]
    -----------------------------+------------------------------------------------
    rep78: Identity              |
                      var(_cons) |   1.37e-06   .0025382             0           .
    -----------------------------+------------------------------------------------
                   var(Residual) |    5850492   996055.4       4190579     8167907
    ------------------------------------------------------------------------------
    LR test vs. linear model: chibar2(01) = 0.00          Prob >= chibar2 = 1.0000
    
    . matrix list r(table)
    
    r(table)[9,4]
                 price:      price:      rep78:   Residual:
                weight       _cons   var(_cons)      var(e)
         b   2.0124198     44.3284   1.366e-06   5850491.8
        se   .36995584   1158.8953   .00253823   996055.36
         z   5.4396216   .03825056          .b          .b
    pvalue   5.339e-08   .96948791          .b          .b
        ll   1.2873196  -2227.0647           0   4190578.5
        ul   2.7375199   2315.7215           .   8167906.7
        df           .           .           .           .
      crit    1.959964    1.959964    1.959964    1.959964
     eform           0           0           0           0
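
    Once the model has run, an individual p-value can be pulled from r(table) by row and column name. A minimal sketch (T is just a hypothetical name for a copy of the returned matrix):

    Code:
    . matrix T = r(table)                   // copy before another command overwrites r()
    . display T["pvalue", "price:weight"]   // p-value for the weight coefficient

    Copying r(table) into a named matrix first is safer, since many commands clear or replace r() results.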



    • #3
      I think OP is referring to the absence of p-values for the random-effects parameters, which are listed as .b in the matrix.



      • #4
        The p-values are problematic in this case because the null hypothesis is typically that the variance equals 0, and a variance cannot be negative, meaning that the null hypothesis is "on the edge of the parameter space". Normal (pun intended) methods for computing p-values don't work here, and the p-values that Stata's standard machinery would compute would be wrong.
        ---------------------------------
        Maarten L. Buis
        University of Konstanz
        Department of history and sociology
        box 40
        78457 Konstanz
        Germany
        http://www.maartenbuis.nl
        ---------------------------------
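
        To illustrate the boundary issue: for a single variance component, the LR statistic is conventionally referred to a 50:50 mixture of a point mass at zero and a chi-squared with 1 df, which is what the chibar2(01) label in the mixed output denotes. A minimal sketch with a hypothetical LR statistic:

        Code:
        * hypothetical LR statistic for testing one variance component
        scalar lr = 4.2
        display chi2tail(1, lr)        // naive chi2(1) p-value (too conservative at the boundary)
        display 0.5*chi2tail(1, lr)    // chibar2(01) p-value, matching mixed's LR test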



        • #5
          +1 to Maarten. You don't show us your model, so it is hard to say how best to move forward. But you can use likelihood-ratio (chi-squared) tests to get something like significance tests for random effects. See this helpful presentation by Oscar Torres-Reyna. Below is some simple code to walk you through the testing process.
          Code:
          use https://www.stata-press.com/data/r19/pig, clear
          mixed weight week || id: 
          // The test that the random intercept is 0 is provided at the bottom of the mixed output:
          // LR test vs. linear model: chibar2(01) = 472.65        Prob >= chibar2 = 0.000
          estimates store ri        // store model results for later testing
          
          * Add random slope and slope-intercept covariance
          mixed weight week || id: week, cov(un)
          estimates store rc
          lrtest rc ri, stats     
          // test of whether the addition of the random slope and slope-intercept covariance provide superior fit
          Output of likelihood ratio test:
          Code:
          Likelihood-ratio test                                 LR chi2(2)  =    291.93
          (Assumption: ri nested in rc)                         Prob > chi2 =    0.0000
          
          Note: The reported degrees of freedom assumes the null hypothesis is not on the boundary of the
                parameter space.  If this is not true, then the reported test is conservative.
          
          Akaike's information criterion and Bayesian information criterion
          
          -----------------------------------------------------------------------------
                 Model |          N   ll(null)  ll(model)      df        AIC        BIC
          -------------+---------------------------------------------------------------
                    ri |        432          .  -1014.927       4   2037.854   2054.127
                    rc |        432          .  -868.9619       6   1749.924   1774.334
          -----------------------------------------------------------------------------
          Note: BIC uses N = number of observations. See [R] BIC note.
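
          As the note says, the reported 2-df test is conservative because the null (zero slope variance) sits on the boundary of the parameter space. A common approximation refers the statistic to a 50:50 mixture of chi2(1) and chi2(2); a rough sketch of that adjusted p-value, assuming this mixture is appropriate for adding one variance plus one covariance:

          Code:
          scalar lr = 291.93                                  // LR statistic from the output above
          display 0.5*chi2tail(1, lr) + 0.5*chi2tail(2, lr)   // mixture-adjusted p-value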



          • #6
            Originally posted by Maarten Buis View Post
            The p-values are problematic in this case as the null hypothesis is typically that the variance equals 0 and a variance cannot be negative, meaning that the null-hypothesis is "on the edge of the parameter space". Normal (pun intended) methods for computing p-values don't work here, and the p-values that Stata's standard machinery for computing these would be wrong.
            Thanks a lot, Maarten! I agree with you, but I noticed that some of the literature reports significance tests for random effects, which confused me a lot.



            • #7
              Originally posted by Yuhan HU View Post
              I agree with you but I noted that some literature report the significance of random effects, which confused me a lot.
              The fact that something is published does not guarantee it is true; lots of mistakes get published. Alternatively, those authors may have done the extra work to compute correct p-values. To determine what is happening in those articles, you will probably need to look at their code.
              ---------------------------------
              Maarten L. Buis
              University of Konstanz
              Department of history and sociology
              box 40
              78457 Konstanz
              Germany
              http://www.maartenbuis.nl
              ---------------------------------
