
  • Sebastian Kripfganz
    replied
    You do not necessarily need stationarity tests.

    I have a couple of comments about your specification:
    1. For the instruments in the iv() option, you are implicitly assuming that all of those variables are uncorrelated with the unobserved country-fixed effects. This is often hardly justifiable with such macroeconomic data.
    2. You are using a system GMM estimator. It is almost never justified to specify the nocons option when you do not have time dummies (as in your second specification). This has the potential to substantially bias your results. There is not really a justification anyway to leave out the time dummies in the subsamples.
    3. Subsample analysis can be difficult if the number of countries in those subsamples is very small. You may not get reliable estimates. The total number of instruments is actually not the most important metric; the number of overidentifying restrictions is what matters. It seems to me that you only have 2 overidentifying restrictions (see the degrees of freedom of the Hansen test) in your second specification. That is quite unproblematic, assuming you still have a reasonably large number of countries in each subsample.
    4. xtdpdgmm cannot exactly replicate your specifications because of the particular way the iv() option is implemented in xtabond2. Notice that iv() without the equation() suboption is not the same as the combination of two iv() options, one with eq(diff) and one with eq(level). If this surprises you, then I recommend explicitly specifying all instruments with the eq() suboption to ensure that you really get what you want. This also assists you in carefully thinking about what instruments you really want to specify.
    5. Instead of trying to replicate your current xtabond2 specification with xtdpdgmm, I suggest that you rebuild your model from scratch (with either command). Think first about the assumptions for each variable (strictly exogenous, predetermined, endogenous; correlated/uncorrelated with the fixed effects) and then build the instruments accordingly. Before you specify a gmm() or an iv() option, make sure you understand its implications. My 2019 London Stata Conference presentation can serve as a guideline.
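
    To illustrate point 5, here is a minimal sketch of how a rebuilt specification might look with xtdpdgmm, with every set of instruments assigned explicitly to a model equation (teffects adds the time dummies). The variable classifications assumed here (lafto endogenous, the remaining controls strictly exogenous) are purely illustrative and would need to be justified for your application:
    Code:
    xtdpdgmm L(0/2).lPoO lafto lgdpcap lrp lal lap gdpgr pg ind ac infl, ///
        model(diff) collapse teffects twostep vce(robust) overid ///
        gmm(lPoO, lag(2 .)) /// lagged dependent variable, first-differenced model
        gmm(lafto, lag(2 .)) /// endogenous regressor, first-differenced model
        iv(lgdpcap lrp lal lap gdpgr pg ind ac infl, model(diff)) /// avoids assuming zero correlation with the fixed effects
        gmm(lPoO, lag(1 1) diff model(level)) /// system GMM level instruments
        gmm(lafto, lag(1 1) diff model(level))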



  • Abiola Ajila
    replied
    Hello Prof. Sebastian

    I have a dataset of 114 countries (N) over an 18-year time period (T). Is it necessary to perform a stationarity test?

    For the full sample, I ran the model below for the effect of agricultural trade openness (lafto) on the prevalence of obesity (lPoO):

    xtabond2 lPoO l.lPoO l2.lPoO lafto ///
    lgdpcap lrp lal lap gdpgr pg ind ac infl ///
    i.year, gmm(l.lPoO, lag(1 .) collapse) ///
    gmm(lafto, lag(1 .) collapse) ///
    iv(lgdpcap lrp lal lap gdpgr pg ind ac infl ///
    i.year) ///
    nodiffsargan twostep robust small nocons

    I would like to run a similar model for sub-samples based on income categories (low-income, lower-middle-income, upper-middle-income, and high-income countries).
    I ran the model below, but I realized that the number of instruments is greater than the number of groups.

    xtabond2 lPoO l.lPoO lafto ///
    lgdpcap lrp lal lap gdpgr pg ind ac infl if inc_gr=="LI", ///
    gmm(l.lPoO, lag(1 2) collapse) ///
    gmm(lafto, lag(1 2) collapse) ///
    iv(lgdpcap lrp lal lap gdpgr pg ind ac infl) ///
    nodiffsargan twostep robust small nocons

    How can I run xtdpdgmm for my estimations (full sample and sub-samples)?



  • Sebastian Kripfganz
    replied
    1. For the endogenous variable X1, the first admissible lag as an instrument in the first-differenced model is lag 2. Thus, you need to change your first gmm() option to gmm(X1, lag(2 8)). Everything else looks okay.
    2. Your ivreg2 command specification makes much stronger assumptions. It assumes that all variables X1, X2, and X3 are uncorrelated with the unobserved group-specific effects (or that such effects are absent). This might become apparent when you look at it from the perspective of the equivalent xtdpdgmm code:
      Code:
      xtdpdgmm Y X1 X2 X3, iv(X2 X3 L.X1 L.X2, m(level)) twostep
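      For point 1, a hedged sketch of the corrected command (only the starting lag in the first gmm() option changes; all other choices are carried over unchanged from the specification in question):
      Code:
      xtdpdgmm Y X1 X2 X3, model(diff) collapse gmm(X1, lag(2 8)) gmm(X2 X3, lag(1 7)) ///
          gmm(X1, lag(1 1) diff model(level)) gmm(X2 X3, lag(0 0) diff model(level)) ///
          vce(r, dc) overid twostep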
    Last edited by Sebastian Kripfganz; 18 Jun 2022, 11:48.



  • Sarah Magd
    replied
    Thanks a lot, Prof. Kripfganz, for this update on the xtdpdgmm code. I have used these doubly-corrected (DC) standard errors to estimate a static model. I have two questions:
    (1) Is this two-step system GMM specification correct for a static model (with X1 endogenous, and X2 and X3 predetermined)?
    xtdpdgmm Y X1 X2 X3, model(diff) collapse gmm(X1, lag(1 8)) gmm(X2 X3, lag(1 7)) gmm(X1, lag(1 1) diff model(level)) gmm(X2 X3, lag(0 0) diff model(level)) vce(r, dc) overid twostep
    Group variable: iso_num                      Number of obs         =       364
    Time variable: year                          Number of groups      =        28

    Moment conditions:     linear =      26      Obs per group:    min =        13
                        nonlinear =       0                        avg =        13
                            total =      26                        max =        13

                                     (Std. Err. adjusted for 28 clusters in iso_num)
    ------------------------------------------------------------------------------
                 |             DC-Robust
               Y |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
              X1 |     .56047   .2827629     1.98   0.047     .0062648    1.114675
              X2 |   .1636905   .0700395     2.34   0.019     .0264156    .3009653
              X3 |   .0537871   .0110046     4.89   0.000     .0322184    .0753557
           _cons |   6.639132   1.125762     5.90   0.000     4.432678    8.845586
    ------------------------------------------------------------------------------
    Instruments corresponding to the linear moment conditions:
    1, model(diff):
    L1.X1 L2.X1 L3.X1 L4.X1 L5.X1 L6.X1 L7.X1 L8.X1
    2, model(diff):
    L1.X2 L2.X2 L3.X2 L4.X2 L5.X2 L6.X2 L7.X2 L1.X3 L2.X3 L3.X3 L4.X3 L5.X3
    L6.X3 L7.X3
    3, model(level):
    L1.D.X1
    4, model(level):
    D.X2 D.X3
    5, model(level):
    _cons

    . estat overid

    Sargan-Hansen test of the overidentifying restrictions
    H0: overidentifying restrictions are valid

    2-step moment functions, 2-step weighting matrix chi2(22) = 26.5779
    Prob > chi2 = 0.2277

    2-step moment functions, 3-step weighting matrix chi2(22) = 27.6004
    Prob > chi2 = 0.1893

    . estat serial

    Arellano-Bond test for autocorrelation of the first-differenced residuals
    H0: no autocorrelation of order 1: z = 2.1103 Prob > |z| = 0.0348
    H0: no autocorrelation of order 2: z = 0.3919 Prob > |z| = 0.6952

    ######################################################
    (2) Can I also estimate this static model with the Instrumental Variable GMM model using the following code:
    ivreg2 Y X2 X3 (X1 = l.X1 l.X2), gmm2s first robust


    2-Step GMM estimation
    ---------------------

    Estimates efficient for arbitrary heteroskedasticity
    Statistics robust to heteroskedasticity

                                                          Number of obs =      308
                                                          F(  3,   304) =   365.97
                                                          Prob > F      =   0.0000
    Total (centered) SS     =  43.70746242                Centered R2   =   0.7327
    Total (uncentered) SS   =  33773.57935                Uncentered R2 =   0.9997
    Residual SS             =  11.6822856                 Root MSE      =    .1948

    ------------------------------------------------------------------------------
                 |               Robust
               Y |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
              X1 |   .3953722    .033303    11.87   0.000     .3300995    .4606449
              X2 |    .378198    .027334    13.84   0.000     .3246244    .4317715
              X3 |   .0524587   .0132118     3.97   0.000     .0265641    .0783534
           _cons |   4.559668   .2689765    16.95   0.000     4.032483    5.086852
    ------------------------------------------------------------------------------
    Underidentification test (Kleibergen-Paap rk LM statistic):           102.662
                                                       Chi-sq(2) P-val =   0.0000
    ------------------------------------------------------------------------------
    Weak identification test (Cragg-Donald Wald F statistic):             1.1e+04
                             (Kleibergen-Paap rk Wald F statistic):      9730.920
    Stock-Yogo weak ID test critical values: 10% maximal IV size            19.93
                                             15% maximal IV size            11.59
                                             20% maximal IV size             8.75
                                             25% maximal IV size             7.25
    Source: Stock-Yogo (2005). Reproduced by permission.
    NB: Critical values are for Cragg-Donald F statistic and i.i.d. errors.
    ------------------------------------------------------------------------------
    Hansen J statistic (overidentification test of all instruments):        0.406
                                                       Chi-sq(1) P-val =   0.5242
    ------------------------------------------------------------------------------
    Instrumented:         X1
    Included instruments: X2 X3
    Excluded instruments: L.X1 L2.X1
    ------------------------------------------------------------------------------



  • Sebastian Kripfganz
    replied
    It is update time again. This update is all about standard errors. As a new feature, you can now obtain doubly-corrected (DC) standard errors (Hwang, Kang, and Lee, 2022, Journal of Econometrics) as an improvement over the familiar Windmeijer-corrected (WC) standard errors. As these authors point out, the DC standard errors correct for an "overidentification bias" in the variance estimation on top of the WC finite-sample correction. These DC standard errors are also misspecification robust, in the sense that the variance-covariance matrix is consistently estimated even if the moment conditions are misspecified. (Obviously, the estimator for the coefficients is still inconsistent under such misspecification.)

    All you need to do to obtain DC standard errors is specify the option vce(robust, dc). For backward-compatibility reasons, vce(robust) by default continues to compute WC standard errors. DC standard errors are available for the one-step, two-step, and iterated GMM estimators. For the time being, they are implemented for models with linear moment conditions only. For models with nonlinear moment conditions, WC standard errors are calculated instead.
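    For example, a minimal sketch using the abdata dataset that also appears further down this thread (the model and instrument choices are illustrative only):
    Code:
    webuse abdata
    xtdpdgmm L(0/1).n w k, gmm(L.n w k, lag(1 4) collapse model(diff)) ///
        iv(L.n w k, diff) twostep vce(robust, dc)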

    In this update, I also improved the calculation of WC standard errors for the iterated GMM estimator, using a simplification of the variance formula exploiting convergence of the iterated GMM estimator. This leads to slightly different standard error estimates than in previous versions. (If the iterated GMM estimator did not converge, the previous iterative variance formula is still applied, analogously to two-step estimation.) I also fixed a small bug in the calculation of conventional two-step standard errors with nonlinear moment conditions.

    As a technical comment with little relevance for most users: While scores computed with the postestimation command predict incorporate the Windmeijer correction (if specified), they do not account for the double correction, because the respective influence functions are nonstandard. Consequently, the generated score variables under vce(robust, wc) and vce(robust, dc) are the same.

    The following table provides an overview of the implications of the different options for your standard errors:

                         vce(conventional)               vce(robust, wc)                 vce(robust, dc)
    onestep nolevel      non-robust SEs                  robust SEs (sandwich formula)   DC-robust SEs
    onestep              generally invalid SEs           robust SEs (sandwich formula)   DC-robust SEs
    onestep nl()         robust SEs (sandwich formula)   WC-robust SEs                   WC-robust SEs (for now)
    twostep              robust SEs                      WC-robust SEs                   DC-robust SEs
    twostep nl()         robust SEs                      WC-robust SEs                   WC-robust SEs (for now)
    igmm                 robust SEs                      WC-robust SEs                   DC-robust SEs
    igmm nl()            robust SEs                      WC-robust SEs                   WC-robust SEs (for now)
    cugmm                robust SEs                      robust SEs                      robust SEs
    cugmm nl()           robust SEs                      robust SEs                      robust SEs
    The SE labels in the xtdpdgmm regression output have been adjusted accordingly.

    You can update to the latest version 4.2.1 of xtdpdgmm (or install it for the first time) by typing the following in Stata's command window:
    Code:
    net install xtdpdgmm, from(http://www.kripfganz.de/stata) replace
    Disclaimer: I have extensively tested this new version and cross-checked the results with alternative software, where possible. However, due to the complexity of the command and the variety of options, I cannot guarantee that the implementation is error-free. Please let me know if you spot any irregularities.



  • Sebastian Kripfganz
    replied
    I had a quick look into the first article. I believe they used a two-step "level" GMM estimator, with standard instruments for the level model only. They did not seem to specify which instruments they actually used. In any case, I do not think there is anything special about it. With xtdpdgmm, you would simply specify appropriate instruments with the iv() option for model(level). Of course, finding appropriate instruments is the key task.
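
    A minimal sketch, with hypothetical variable names (z1 and z2 as external instruments for an endogenous regressor x in the untransformed level model):
    Code:
    xtdpdgmm y x, model(level) iv(z1 z2) twostep vce(robust)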



  • Sarah Magd
    replied
    For the IV-GMM, I am also confused about the naming of this method. As an example from the literature, Acheampong et al. (2021) mention that they use the instrumental variable generalized method of moments (IV-GMM) based on Baum et al. (2003).
    In this case, would IV-GMM in ivreg2 be equivalent to the one-step sys-GMM in xtdpdgmm?



  • Sebastian Kripfganz
    replied
    • Yes, in a static model with predetermined/endogenous variables, the two-step GMM estimator can still be useful.
    • I do not know what is meant by IV-GMM here.



  • Sarah Magd
    replied
    Thanks a lot for the constructive answer.

    - As far as I understand, if our model is specified as a static model, and we want to control for the fixed effects and obtain more efficient estimates, we can still use the two-step GMM estimator. Am I right?
    - Would you please clarify the difference between IV-GMM and sys-GMM? I ask because I found a paper in the literature where the authors use IV-GMM to estimate a static model. They also claim that "the IV-GMM controls for the endogeneity problem and variable omission bias, and produces consistent estimates".
    - How can we estimate the IV-GMM using xtdpdgmm?



  • Sebastian Kripfganz
    replied
    • Yes, the one-step estimator with the default weighting matrix yields the 2SLS estimator.
    • The two-step system GMM estimator would be generally more efficient because it accounts for the extra variance coming from the unobserved fixed effects. It would be equivalent to the 2SLS estimator only if there were no unobserved fixed effects (in other words, if their variance was zero).
    • Checking for serial correlation after a static model could still be useful. If you have predetermined or endogenous regressors, serial correlation would still affect the first admissible lag for the instruments. Serial correlation might also indicate that a dynamic model may be more appropriate to avoid an omitted-variable bias.
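
    As an illustration of the last two points, a minimal sketch of a static two-step system GMM estimation followed by the serial-correlation test. The variable names come from the abdata example used elsewhere in this thread; the classification (w endogenous, k predetermined) and the lag ranges are assumptions for illustration only:
    Code:
    webuse abdata
    xtdpdgmm n w k, model(diff) collapse twostep vce(robust) overid ///
        gmm(w, lag(2 4)) gmm(k, lag(1 4)) ///
        gmm(w, lag(1 1) diff model(level)) gmm(k, lag(0 0) diff model(level))
    estat serial, ar(1/2)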



  • Sarah Magd
    replied
    #########################################
    # Estimating a static model using xtdpdgmm
    #########################################

    Dear Prof. Kripfganz,

    Suppose we specify a model without the lagged dependent variable as a regressor (i.e., a static model), and this model suffers from an endogeneity problem (e.g., due to reverse causality). The specification should also account for the unobserved fixed effects.
    - As far as I know, using the system GMM estimator with the default weighting matrix in xtdpdgmm would be equivalent to the 2SLS estimator. Am I right?
    - If we use the two-step GMM estimator with this specification, would it still be more efficient than the one-step system GMM?
    - Do we also need to check for serial correlation after estimating the static model with GMM?
    Last edited by Sarah Magd; 11 Jun 2022, 03:45.



  • Sebastian Kripfganz
    replied
    From the information you have provided, it is not clear how exactly you specified the model. The full command syntax would help. Assuming that you are referring to the first-differenced model, the starting lag 0 is not valid for the lagged dependent variable because it is correlated with the first-differenced error term. You need to start from lag 1 instead. For an endogenous variable, the respective starting lag would be 2 (assuming no serial correlation of the idiosyncratic level errors).

    If you were using the forward-orthogonal transformation for the model instead of the first-difference transformation, the first admissible lag would be 0 for a predetermined variable and 1 for an endogenous variable.
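
    In xtdpdgmm syntax, the contrast might look as follows for a hypothetical model with lagged dependent variable y, predetermined w, and endogenous x (a sketch of the starting lags only, not a complete specification):
    Code:
    * first-difference transformation: predetermined from lag 1, endogenous from lag 2
    xtdpdgmm L(0/1).y w x, model(diff) collapse twostep vce(robust) ///
        gmm(y, lag(2 .)) gmm(w, lag(1 .)) gmm(x, lag(2 .))
    * forward-orthogonal deviations: predetermined from lag 0, endogenous from lag 1
    xtdpdgmm L(0/1).y w x, model(fodev) collapse twostep vce(robust) ///
        gmm(y, lag(1 .)) gmm(w, lag(0 .)) gmm(x, lag(1 .))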



  • Neyati Ahuja
    replied
    Hello Prof. Sebastian

    I have a query regarding a predetermined variable.
    When I treat the lag of the dependent variable as predetermined, with lags (0 0), (0 1), ..., (0 .), the coefficient on the lagged dependent variable after running the xtdpdgmm command is positive but not significant. However, when I repeat the same with lags starting from 1, the coefficient is positive as well as significant.
    I have also checked instrument validity using the Sargan and Hansen tests.
    1. What should be done in such a situation?
    2. Is there a problem with the model specification, or can the instrument lags for the predetermined variable start from 1?
    3. If the answer to the previous question is yes, where should the instrument lags for the endogenous variables start? Can they still start from 1?

    Regards



  • Sebastian Kripfganz
    replied
    I am afraid the last xtdpdgmm update (version 2.3.11) was premature and did more harm than good. I "fixed" a bug that actually wasn't one. I apologize for this mishap.

    With the now available latest version 2.4.0, the correct computations have been restored for estat serial and estat hausman. Furthermore, a minor bug in option auxiliary, which was introduced in version 2.3.10, has been fixed.

    As a major new feature, this latest version can now compute the continuously-updating (CU) GMM estimator as an alternative to the two-step and iterated GMM estimators. Simply specify the new option cugmm. The CU-GMM estimator updates the weighting matrix simultaneously with the coefficient estimates while minimizing the objective function. This is in contrast to the iterated GMM estimator (of which the two-step estimator is a special case), which iterates back and forth between updating the coefficient estimates and the weighting matrix. As a technical comment: The CU-GMM objective function generally does not have a unique minimum. The estimator can therefore be sensitive to the choice of initial values. By default, xtdpdgmm uses the two-stage least squares estimates, ignoring any nonlinear moment conditions, as starting values for the numerical CU-GMM optimization. This seems to work fine.

    The following example illustrates the CU-GMM estimator, and how the xtdpdgmm results can be replicated with ivreg2 (up to minor differences due to the numerical optimization):
    Code:
    . webuse abdata
    
    . xtdpdgmm L(0/1).n w k, gmm(L.n w k, l(1 4) c m(d)) iv(L.n w k, d) cu nofooter
    
    Generalized method of moments estimation
    
    Fitting full model:
    
    Continously updating:
    Iteration 0:   f(b) =  .22189289  
    Iteration 1:   f(b) =  .08073713  
    Iteration 2:   f(b) =  .07655265  
    Iteration 3:   f(b) =  .07646044  
    Iteration 4:   f(b) =  .07645679  
    Iteration 5:   f(b) =  .07645673  
    
    Group variable: id                           Number of obs         =       891
    Time variable: year                          Number of groups      =       140
    
    Moment conditions:     linear =      16      Obs per group:    min =         6
                        nonlinear =       0                        avg =  6.364286
                            total =      16                        max =         8
    
    ------------------------------------------------------------------------------
               n | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
    -------------+----------------------------------------------------------------
               n |
             L1. |   .4342625   .1106959     3.92   0.000     .2173024    .6512225
                 |
               w |  -2.153388   .3702817    -5.82   0.000    -2.879126   -1.427649
               k |  -.0054155   .1221615    -0.04   0.965    -.2448477    .2340166
           _cons |   7.284639   1.123693     6.48   0.000     5.082241    9.487037
    ------------------------------------------------------------------------------
    
    . predict iv*, iv
     1, model(diff):
       L1.L.n L2.L.n L3.L.n L4.L.n L1.w L2.w L3.w L4.w L1.k L2.k L3.k L4.k
     2, model(level):
       D.L.n D.w D.k
     3, model(level):
       _cons
    
    . ivreg2 n (L.n w k = iv*), cue cluster(id) nofooter
    Iteration 0:   f(p) =  31.065005  (not concave)
    Iteration 1:   f(p) =  27.307398  (not concave)
    Iteration 2:   f(p) =  26.543788  (not concave)
    Iteration 3:   f(p) =  25.047573  (not concave)
    Iteration 4:   f(p) =  24.521102  (not concave)
    Iteration 5:   f(p) =  24.107293  (not concave)
    Iteration 6:   f(p) =  23.931765  (not concave)
    Iteration 7:   f(p) =  23.746613  (not concave)
    Iteration 8:   f(p) =  23.636564  
    Iteration 9:   f(p) =  23.304181  (not concave)
    Iteration 10:  f(p) =  23.241277  (not concave)
    Iteration 11:  f(p) =  23.178503  (not concave)
    Iteration 12:  f(p) =  23.125314  (not concave)
    Iteration 13:  f(p) =  23.074408  
    Iteration 14:  f(p) =  19.278726  
    Iteration 15:  f(p) =  12.160385  (not concave)
    Iteration 16:  f(p) =  11.700402  
    Iteration 17:  f(p) =   11.03222  (not concave)
    Iteration 18:  f(p) =  10.950583  (not concave)
    Iteration 19:  f(p) =  10.907663  
    Iteration 20:  f(p) =  10.800048  
    Iteration 21:  f(p) =  10.704051  
    Iteration 22:  f(p) =  10.703945  
    Iteration 23:  f(p) =  10.703942  
    Iteration 24:  f(p) =  10.703942  
    
    CUE estimation
    --------------
    
    Estimates efficient for arbitrary heteroskedasticity and clustering on id
    Statistics robust to heteroskedasticity and clustering on id
    
    Number of clusters (id) =          140                Number of obs =      891
                                                          F(  3,   139) =    83.84
                                                          Prob > F      =   0.0000
    Total (centered) SS     =  1601.042507                Centered R2   =   0.5099
    Total (uncentered) SS   =  2564.249196                Uncentered R2 =   0.6940
    Residual SS             =  784.7107633                Root MSE      =    .9385
    
    ------------------------------------------------------------------------------
                 |               Robust
               n | Coefficient  std. err.      z    P>|z|     [95% conf. interval]
    -------------+----------------------------------------------------------------
               n |
             L1. |   .4342987   .1003318     4.33   0.000     .2376521    .6309453
                 |
               w |  -2.153233   .2986292    -7.21   0.000    -2.738535    -1.56793
               k |  -.0053816   .1162739    -0.05   0.963    -.2332742    .2225111
           _cons |   7.284114   .8901409     8.18   0.000     5.539469    9.028758
    ------------------------------------------------------------------------------
    To update to the new version, type the following in Stata's command window:
    Code:
    net install xtdpdgmm, from(http://www.kripfganz.de/stata) replace
    Disclaimer: I have extensively tested this new version. However, due to the complexity of the command, the variety of options, and the lack of alternative software to compare the results for some advanced options, I cannot guarantee that the implementation is error-free. Please let me know if you spot any irregularities.



  • Sebastian Kripfganz
    replied
    If either X1 or X2 is endogenous, then it usually makes sense to assume that their interaction X3 is endogenous as well. You can then just treat it the same way as any other endogenous variable.
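
    In practice, that could look like the following sketch, where X3 is the interaction of X1 and X2, and both X1 and X3 are instrumented as endogenous. The lag ranges, and treating X2 as predetermined, are assumptions for illustration only:
    Code:
    generate X3 = X1 * X2
    xtdpdgmm Y X1 X2 X3, model(diff) collapse twostep vce(robust) overid ///
        gmm(X1 X3, lag(2 4)) /// endogenous: X1 and its interaction X3
        gmm(X2, lag(1 4))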

