
  • vce(robust) using ppmlhdfe

    Dear list members,

    I have a quick question concerning ppmlhdfe (ssc describe ppmlhdfe). In the companion Stata Journal article, a comparison of standard errors (SEs) from a one-way fixed-effects model with those from xtpoisson is commented on as follows:

    [...] the estimated coefficients for the variables are the same as those obtained with the xtpoisson command. However, the results for the estimates of the standard errors are different because, by default, ppmlhdfe reports robust standard errors (p. 107)
    A footnote further explains that requesting robust SEs in xtpoisson would still not obtain the same SEs, because:

    Stata replaces vce(robust) with vce(cluster ships) but does not apply a small-sample adjustment for the number of clusters
    I'd like to understand if this implies that, in this last case (xtpoisson with vce(robust) versus ppmlhdfe, with one fixed effect), the difference in SEs originates entirely from the lack of small-sample adjustment in xtpoisson - and if not, which other differences are involved.

    In other words, I am asking whether ppmlhdfe also replaces the default vce(robust) with SEs clustered on whichever fixed effects are specified in the regression (perhaps through the absorb() option).
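    For context, the finite-sample cluster adjustment referred to above is the factor c = G/(G-1) × (N-1)/(N-k) that Stata applies to the "meat" of cluster-robust VCEs. A minimal Python sketch of just this factor (the function name is mine, not a Stata or ppmlhdfe API; the ships numbers are taken from the output shown later in this thread):

    ```python
    # Stata's finite-sample adjustment for cluster-robust VCEs:
    # the sandwich "meat" is multiplied by c = G/(G-1) * (N-1)/(N-k),
    # so each SE is inflated by sqrt(c). Illustrative sketch only.
    def cluster_adjustment(G, N, k):
        """G = number of clusters, N = observations, k = estimated parameters."""
        return G / (G - 1) * (N - 1) / (N - k)

    # Ships data as used later in this thread: 5 ships (clusters),
    # 34 observations, 9 estimated parameters (4 covariates + 5 ship terms).
    c = cluster_adjustment(G=5, N=34, k=9)
    print(round(c, 4), round(c ** 0.5, 4))  # adjustment factor and SE inflation
    ```

    With only 5 clusters the factor is sizeable, which is why dropping it (as xtpoisson reportedly does) changes the SEs noticeably.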
    I'm using StataNow/MP 18.5

  • #2
    ppmlhdfe is from SSC, as you are asked to explain (FAQ Advice #12). -vce(robust)- traditionally referred to White standard errors in panel data estimators in Stata. But following some advances in the literature (e.g., Stock and Watson 2008), this was changed to -vce(cluster panelvar)-. So the description states that ppmlhdfe still implements White standard errors if you specify -vce(robust)-.
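    The mechanical difference between the two estimators lies only in the "meat" of the sandwich: White/HC sums outer products of observation-level scores, while clustering first sums the scores within each panel and then takes outer products. A toy numpy sketch of just the meat (bread matrix and small-sample factors omitted; function names are mine, purely illustrative):

    ```python
    import numpy as np

    def white_meat(S):
        # White/HC "meat": sum over observations i of s_i s_i'
        # (S is an N x k matrix of score contributions)
        return S.T @ S

    def cluster_meat(S, cluster_ids):
        # Cluster-robust "meat": sum scores within each cluster first,
        # then take outer products of the cluster sums
        M = np.zeros((S.shape[1], S.shape[1]))
        for g in np.unique(cluster_ids):
            sg = S[cluster_ids == g].sum(axis=0)
            M += np.outer(sg, sg)
        return M
    ```

    When every observation is its own cluster the two coincide, which is why the robust/cluster distinction only bites in panel settings.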

    Reference:
    Stock, J. H., and M. W. Watson. 2008. Heteroskedasticity-robust standard errors for fixed effects panel data regression. Econometrica 76: 155–174. https://doi.org/10.1111/j.0012-9682.2008.00821.x.
    Last edited by Andrew Musau; 07 Jul 2023, 09:17.



    • #3
      Thanks Andrew (I did point to SSC though!). I am aware of the advances, and for this reason I thought the more recent ppmlhdfe might have followed the same road as Stata in incorporating them. I found the description ambiguous because the allusion to the absence of a small-sample correction can be read as pointing to the origin of the difference once robust SEs are requested in xtpoisson, which could make sense and would in turn suggest that both commands replace vce(robust) with vce(cluster clustervar). At any rate, if you are sure about this, I'm happy to declare myself not puzzled anymore.



      • #4
        Originally posted by Matteo Pinna Pintor
        (I did point to SSC though!)
        Correct. Sorry, my oversight.

        I thought the more recent ppmlhdfe might have followed the same road as Stata in incorporating them. I found the description ambiguous because the allusion to the absence of a small-sample correction can be read as pointing to the origin of the difference once robust SEs are requested in xtpoisson, which could make sense and would in turn suggest that both commands replace vce(robust) with vce(cluster clustervar). At any rate, if you are sure about this, I'm happy to declare myself not puzzled anymore.
        I first noticed this with reghdfe, so I am careful these days about assuming robust = cluster(panelvar) in the HDFE set of estimators. The easiest way to verify this is to use the poisson command (not an -xt- or panel-data command), where robust still equals White standard errors, and introduce the fixed effects using indicators.

        Code:
        webuse ships, clear
        *WHITE STANDARD ERRORS
        ppmlhdfe accident op_75_79 co_65_69 co_70_74 co_75_79, absorb(ship) vce(robust)
        poisson accident op_75_79 co_65_69 co_70_74 co_75_79 i.ship, vce(robust)
        Res.:

        Code:
         
        . *WHITE STANDARD ERRORS
        
        . 
        . ppmlhdfe accident op_75_79 co_65_69 co_70_74 co_75_79, absorb(ship) vce(robust)
        Iteration 1:   deviance = 1.6167e+02  eps = .         iters = 1    tol = 1.0e-04  min(eta) =  -2.14  P   
        Iteration 2:   deviance = 1.3971e+02  eps = 1.57e-01  iters = 1    tol = 1.0e-04  min(eta) =  -2.56      
        Iteration 3:   deviance = 1.3909e+02  eps = 4.46e-03  iters = 1    tol = 1.0e-04  min(eta) =  -2.69      
        Iteration 4:   deviance = 1.3909e+02  eps = 1.15e-05  iters = 1    tol = 1.0e-04  min(eta) =  -2.70      
        Iteration 5:   deviance = 1.3909e+02  eps = 2.66e-10  iters = 1    tol = 1.0e-05  min(eta) =  -2.70   S O
        ------------------------------------------------------------------------------------------------------------
        (legend: p: exact partial-out   s: exact solver   h: step-halving   o: epsilon below tolerance)
        Converged in 5 iterations and 5 HDFE sub-iterations (tol = 1.0e-08)
        
        HDFE PPML regression                              No. of obs      =         34
        Absorbing 1 HDFE group                            Residual df     =         25
                                                          Wald chi2(4)    =       8.14
        Deviance             =  139.0852637               Prob > chi2     =     0.0866
        Log pseudolikelihood = -118.4758775               Pseudo R2       =     0.6674
        ------------------------------------------------------------------------------
                     |               Robust
            accident | Coefficient  std. err.      z    P>|z|     [95% conf. interval]
        -------------+----------------------------------------------------------------
            op_75_79 |   .2928003   .2736869     1.07   0.285    -.2436162    .8292168
            co_65_69 |   .5824489   .3063304     1.90   0.057    -.0179477    1.182846
            co_70_74 |   .4627844   .3591039     1.29   0.197    -.2410462    1.166615
            co_75_79 |  -.1951267   .3849763    -0.51   0.612    -.9496664     .559413
               _cons |    2.48605   .3500144     7.10   0.000     1.800034    3.172066
        ------------------------------------------------------------------------------
        
        Absorbed degrees of freedom:
        -----------------------------------------------------+
         Absorbed FE | Categories  - Redundant  = Num. Coefs |
        -------------+---------------------------------------|
                ship |         5           0           5     |
        -----------------------------------------------------+
        
        . 
        . poisson accident op_75_79 co_65_69 co_70_74 co_75_79 i.ship, vce(robust)
        
        Iteration 0:   log pseudolikelihood = -131.31515  
        Iteration 1:   log pseudolikelihood = -118.52901  
        Iteration 2:   log pseudolikelihood =  -118.4759  
        Iteration 3:   log pseudolikelihood = -118.47588  
        Iteration 4:   log pseudolikelihood = -118.47588  
        
        Poisson regression                                      Number of obs =     34
                                                                Wald chi2(8)  = 214.61
                                                                Prob > chi2   = 0.0000
        Log pseudolikelihood = -118.47588                       Pseudo R2     = 0.6674
        
        ------------------------------------------------------------------------------
                     |               Robust
            accident | Coefficient  std. err.      z    P>|z|     [95% conf. interval]
        -------------+----------------------------------------------------------------
            op_75_79 |   .2928003   .2736875     1.07   0.285    -.2436174     .829218
            co_65_69 |   .5824489    .306331     1.90   0.057    -.0179488    1.182847
            co_70_74 |   .4627844   .3591047     1.29   0.197    -.2410478    1.166617
            co_75_79 |  -.1951267   .3849774    -0.51   0.612    -.9496685    .5594151
                     |
                ship |
                  2  |    1.79572     .38891     4.62   0.000      1.03347    2.557969
                  3  |  -1.252763   .5565759    -2.25   0.024    -2.343632   -.1618943
                  4  |  -.9044563   .6808243    -1.33   0.184    -2.238847    .4299348
                  5  |  -.1462833   .4229326    -0.35   0.729    -.9752159    .6826494
                     |
               _cons |   1.308451   .4401896     2.97   0.003     .4456948    2.171206
        ------------------------------------------------------------------------------
        
        .



        • #5
          Thanks Andrew for the illustration - I was indeed partly moved by laziness in my request here, though again I was also unsure about the correction issue.

          If I may digress slightly: do you (or anyone reading, of course) have experience with inference after this command? If so, do you have a good idea of the currently available options for error clustering with/after it? I'm thinking of resampling-based methods, but also distance-based analytic methods. For the former class, I couldn't find a command - I read that boottest (ssc describe boottest) is not compatible. For the latter class, in all honesty, I couldn't even properly identify applications to nonlinear models. The underlying motivating issue could be one of few clusters, for example.
          Last edited by Matteo Pinna Pintor; 13 Jul 2023, 09:50.



          • #6
            If you look at the documentation of ppmlhdfe (http://scorreia.com/help/ppmlhdfe.html #3 under "Description"), it is compatible with boottest from SSC.

            Allows two- and multi-way clustering, and can be used in combination with boottest to derive wild bootstrap inference



            • #7
              This, sadly, appears not to be true - see here and here.



              • #8
                Correct. If you are able to estimate the model with poisson, then I think you can run boottest. Not sure about -xtpoisson, fe-. Perhaps the author David Roodman can advise once he sees this thread.



                • #9
                  I am not sure about the cost-benefit of falling back to poisson - I would give up ppmlhdfe's advanced check for the existence of the estimates (its separation check), although it is also true that I don't really have many fixed effects (20 additional intercepts now, although I might try something that increases them a bit).

                  My idea was to first clarify exactly which implementation options are currently available with ppmlhdfe. Not many, as far as I can see - but I may easily be missing out on some relatively new extension or command.

                  My main issue is one of few clusters, although in some cases these are also quite heterogeneous in terms of treatment status (and hence, likely, treatment assignment). Down-scaling the clustering level is probably justifiable to some extent, and I might try a middle way or even default to a much more fine-grained cluster variable. I actually have two fine-grained cluster variables (PSU, survey strata), which are moreover nested, so if not much changes when moving from one to the other, one could argue that is enough (though I am not fully sure about this rule of thumb).

                  Given that these are all geographical areas, and it is the physical aspects that matter, alternatives based on distance-dependent weights are also relevant - e.g., the work of Conley for OLS and IV. But again I can't find Stata implementations for nonlinear models.



                  • #10
                    boottest will work after poisson, but not xtpoisson. For poisson, it will perform the score bootstrap of Kline and Santos. We discuss this a bit in the Fast & Wild paper--it's probably not as big an improvement over classical tests as the wild bootstrap for linear models, but can still be an improvement.
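                    For intuition, the score bootstrap perturbs cluster-level score contributions with random sign flips rather than re-estimating the model on resampled data, which is what makes it cheap. A toy one-parameter Python sketch under that idea (my own simplification for a scalar hypothesis, not boottest's actual algorithm):

                    ```python
                    import numpy as np

                    def score_bootstrap_pvalue(scores, B=999, seed=12345):
                        """Toy score-bootstrap p-value for H0: beta = 0, scalar case.

                        `scores` holds cluster-level score contributions evaluated
                        under the null. Each draw flips cluster signs with Rademacher
                        weights and recomputes the studentized score statistic;
                        no model re-estimation is needed.
                        """
                        rng = np.random.default_rng(seed)
                        G = scores.shape[0]
                        denom = np.sqrt(np.sum(scores ** 2))  # unchanged by sign flips
                        t_obs = abs(scores.sum()) / denom
                        t_boot = np.empty(B)
                        for b in range(B):
                            w = rng.choice([-1.0, 1.0], size=G)  # Rademacher weights
                            t_boot[b] = abs((w * scores).sum()) / denom
                        # symmetric p-value with the usual +1 correction
                        return (1 + np.sum(t_boot >= t_obs)) / (B + 1)
                    ```

                    Note that with G clusters the bootstrap distribution has at most 2^G distinct values, which is precisely the few-clusters caveat raised earlier in the thread.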

