
  • Mugi Jang
    replied
    Dear Professor Kripfganz,
    I have a question about p. 112 of your 2019 London Stata Conference presentation:

    "Again skipping some intermediate steps, we might be willing to treat k as strictly exogenous, using its contemporaneous term as an instrument for the model in mean deviations:"
    . xtdpdgmm L(0/2).n L(0/2).w k L(0/3).ys c.w#c.w c.w#c.k, model(fod) collapse gmm(n, lag(1 .)) ///
    > gmm(w, lag(0 .)) gmm(k, lag(0 .)) gmm(ys, lag(1 .)) gmm(c.w#c.w, lag(0 .)) gmm(c.w#c.k, lag(0 .)) ///
    > gmm(k, lag(0 0) model(md)) teffects two vce(r) overid
    Here you simultaneously use gmm(k, lag(0 .)) and gmm(k, lag(0 0) model(md)).
    My doubt is that, although one is first-differenced and the other is mean-deviated, k at lag(0 0) is used twice as an instrument in one equation.
    Can I repeatedly use one instrument variable in different transformations within the same equation?
    Thank you in advance for your kind reply.



  • Sebastian Kripfganz
    replied
    xtdpd always uses a small-sample adjustment. While the small-sample adjustments differ across commands, none of them are necessarily wrong. I am unable to tell what xtdpd and xtabond2 do differently. In my view, xtdpdgmm follows the usual convention for small-sample standard error correction.



  • Mugi Jang
    replied
    Thank you, Professor Sebastian.

    1. When the small option is removed, xtdpdgmm and xtabond2 coincide, but xtdpd (the Stata manual example) differs from the others.

    xtdpdgmm L(0/1).n L(0/2).(w k) yr1980-yr1984 year, model(diff) iv(L(0/1).(w k) yr1980-yr1984 year,diff ) gmm(L.n, lag(2 .)) gmm(L.n,diff lag(1 1) model(level)) overid w(ind)
    L1. | .9603675 .0945063 10.16 0.000 .7751387 1.145596

    xtabond2 L(0/1).n L(0/2).(w k) yr1980-yr1984 year, iv(L(0/1).(w k) yr1980-yr1984 year,equation(diff)) gmm(L.n,lag(2 .) equation(diff)) gmm(L.n,equation(level) lag(1 1)) h(2) ar(3)
    L1. | .9603675 .0945063 10.16 0.000 .7751387 1.145596

    xtdpd L(0/1).n L(0/2).(w k) yr1980-yr1984 year, div(L(0/1).(w k) yr1980-yr1984 year) dgmmiv(n, lag(3 .)) lgmmiv(n, lag(2)) hascons
    L1. | .9603675 .095608 10.04 0.000 .7729794 1.147756

    2. When the small option is added, xtdpd and xtabond2 coincide, but not xtdpdgmm; so xtdpdgmm does not replicate the Stata manual example. Note that xtdpd does not permit the small option.

    xtdpdgmm L(0/1).n L(0/2).(w k) yr1980-yr1984 year, model(diff) iv(L(0/1).(w k) yr1980-yr1984 year,diff ) gmm(L.n, lag(2 .)) gmm(L.n,diff lag(1 1) model(level)) overid w(ind) small
    L1. | .9603675 .095335 10.07 0.000 .7732074 1.147528

    xtabond2 L(0/1).n L(0/2).(w k) yr1980-yr1984 year, iv(L(0/1).(w k) yr1980-yr1984 year,equation(diff)) gmm(L.n,lag(2 .) equation(diff)) gmm(L.n,equation(level) lag(1 1)) h(2) ar(3) small
    L1. | .9603675 .095608 10.04 0.000 .7726711 1.148064

    xtdpd L(0/1).n L(0/2).(w k) yr1980-yr1984 year, div(L(0/1).(w k) yr1980-yr1984 year) dgmmiv(n, lag(3 .)) lgmmiv(n, lag(2)) hascons
    L1. | .9603675 .095608 10.04 0.000 .7729794 1.147756

    This is my conclusion.



  • Sebastian Kripfganz
    replied
    For the first example, add the nolevel option to the xtdpdgmm command. This will give you the same standard errors as the other commands.
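    For concreteness, the amended command might look like this (a sketch based on the xtdpdgmm call in the first example; untested):
    Code:
    xtdpdgmm L(0/2).n L(0/1).w L(0/2).(k ys) yr1980-yr1984 year, model(diff) ///
        iv(L(0/1).w L(0/2).(k ys) yr1980-yr1984 year, diff) gmm(L.n, lag(1 .)) ///
        nolevel small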

    In the second example, you generally do not get identical standard errors when you also estimate an intercept. (Note that the other coefficients remain unchanged if you remove the intercept.) Off the top of my head, I cannot tell you exactly what causes the small difference in standard errors when the intercept is included; it has something to do with the small-sample adjustment. (Without option small, the standard errors from xtabond2 and xtdpdgmm coincide.) I have double-checked the code of xtdpdgmm; all it does with the small option is rescale the variance-covariance matrix by the factor N / (N - rank(e(V))) in this case. Interestingly, in your example e(V) is rank deficient; however, this does not seem to explain the differences. Also note that xtabond2 produces different standard errors with and without option nomata.
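    If you want to verify this rescaling yourself, here is a minimal sketch (assuming an xtdpdgmm run with option small is in memory; whether the relevant N is e(N) or the number of groups is an assumption you should check against the output):
    Code:
    * compute the scaling factor N / (N - rank(e(V))) manually
    mata: st_numscalar("rkV", rank(st_matrix("e(V)")))
    display "rank of e(V):   " rkV
    display "scaling factor: " e(N) / (e(N) - rkV)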

    In the third example, xtdpdgmm estimates the error variance from the level residuals, while xtdpd and xtabond2 estimate it from the first-differenced residuals. This is because with xtdpdgmm, model(diff) is only specified locally within the gmm() and iv() options; the global default used to compute the standard errors remains model(level). You can either set model(diff) globally, or add vce(, model(diff)) to also set it locally for the VCE. Admittedly, this is a subtle issue easily overlooked; it is a consequence of the flexibility the command provides. (You would still need to remove the small option again for exact replication.)
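    As a sketch, the two alternatives for the third example would look like this (adapted from the command in the post being replied to; untested):
    Code:
    * alternative 1: set model(diff) globally (small dropped for exact replication)
    xtdpdgmm L(0/1).n L(0/2).(w k) yr1980-yr1984 year, model(diff) ///
        iv(L(0/1).(w k) yr1980-yr1984 year, diff) gmm(L.n, lag(2 .)) ///
        gmm(L.n, diff lag(1 1) model(level)) overid w(ind)

    * alternative 2: keep the local specifications and set model(diff) for the VCE
    xtdpdgmm L(0/1).n L(0/2).(w k) yr1980-yr1984 year, ///
        iv(L(0/1).(w k) yr1980-yr1984 year, diff model(diff)) ///
        gmm(L.n, lag(2 .) model(diff)) gmm(L.n, diff lag(1 1) model(level)) ///
        overid w(ind) vce(, model(diff))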



  • Mugi Jang
    replied
    Yes, here is Example 1 from the Stata manual entry for xtabond:
    use https://www.stata-press.com/data/r17/abdata
    . xtabond n l(0/1).w l(0/2).(k ys) yr1980-yr1984 year, lags(2) noconstant
    (some output omitted)
    L1. .6862261 .1486163 4.62 0.000 .3949435 .9775088

    xtdpd L(0/2).n L(0/1).w L(0/2).(k ys) yr1980-yr1984 year, noconstant div(L(0/1).w L(0/2).(k ys) yr1980-yr1984 year) dgmmiv(n)
    (some output omitted)
    L1. | .6862261 .1486163 4.62 0.000 .3949435 .9775088

    xtabond2 L(0/2).n L(0/1).w L(0/2).(k ys) yr1980-yr1984 year, gmm(L.n,lag(1 .)) iv( L(0/1).w L(0/2).(k ys) yr1980-yr1984 year ) nolevel small
    (some output omitted)
    L1. | .6862261 .1486163 4.62 0.000 .3943654 .9780869
    These are t-values rather than z-values because of the small option, but the S.E. coincides.

    xtdpdgmm L(0/2).n L(0/1).w L(0/2).(k ys) yr1980-yr1984 year, model(diff) iv(L(0/1).w L(0/2).(k ys) yr1980-yr1984 year,diff) gmm(L.n, lag(1 .)) nocons small
    (some output omitted)
    L1. | .6862261 .1482452 4.63 0.000 .3951916 .9772607
    These are t-values rather than z-values because of the small option, and the S.E. is marginally different.

    Here is another example from the Stata manual entry for xtdpd, Example 5: Allowing for MA(1) errors.

    xtdpd L(0/1).n L(0/2).(w k) yr1980-yr1984 year, div(L(0/1).(w k) yr1980-yr1984 year) dgmmiv(n, lag(3 .)) hascons
    (some output omitted)
    L1. | .8696303 .2014473 4.32 0.000 .4748008 1.26446

    xtabond2 L(0/1).n L(0/2).(w k) yr1980-yr1984 year, iv(L(0/1).(w k) yr1980-yr1984 year,eq(diff)) gmm(L.n, lag(2 .) eq(diff)) h(2) small
    (some output omitted)
    L1. | .8696303 .2014473 4.32 0.000 .4741513 1.265109

    xtdpdgmm L(0/1).n L(0/2).(w k) yr1980-yr1984 year, model(diff) iv(L(0/1).(w k) yr1980-yr1984 year, diff) gmm(L.n, lag(2 .)) small
    (some output omitted)
    L1. | .8696303 .2008722 4.33 0.000 .4752813 1.263979

    The S.E. is slightly different from the first two results.

    xtdpd L(0/1).n L(0/2).(w k) yr1980-yr1984 year, div(L(0/1).(w k) yr1980-yr1984 year) dgmmiv(n, lag(3 .)) lgmmiv(n, lag(2)) hascons
    (some output omitted)
    L1. | .9603675 .095608 10.04 0.000 .7729794 1.147756

    xtabond2 L(0/1).n L(0/2).(w k) yr1980-yr1984 year, iv(L(0/1).(w k) yr1980-yr1984 year,equation(diff)) gmm(L.n,lag(2 .) equation(diff)) gmm(L.n,equation(level) lag(1 1)) h(2) ar(3) small
    (some output omitted)
    L1. | .9603675 .095608 10.04 0.000 .7726711 1.148064

    xtdpdgmm L(0/1).n L(0/2).(w k) yr1980-yr1984 year, iv(L(0/1).(w k) yr1980-yr1984 year, diff model(diff)) gmm(L.n, lag(2 .) model(diff)) gmm(L.n, diff lag(1 1) model(level)) overid w(ind) small
    (some output omitted)
    L1. | .9603675 .1256404 7.64 0.000 .7137123 1.207023

    In this case the S.E. is very different from the first two results.

    I just tried to replicate the Stata manual examples (xtabond, xtdpdsys) with xtdpd, xtabond2, and xtdpdgmm.



  • Sebastian Kripfganz
    replied
    Originally posted by Mugi Jang View Post
    Dear Sebastian Kripfganz
    Could you summarize the standard-error options needed to obtain the same values across xtdpd, xtabond2, and xtdpdgmm?
    Thank you in advance
    As a starting point, could you please provide a replicable example, say, using the abdata data set, where standard errors are not aligned?



  • Mugi Jang
    replied
    Dear Sebastian Kripfganz
    Could you summarize the standard-error options needed to obtain the same values across xtdpd, xtabond2, and xtdpdgmm?
    Thank you in advance



  • Sebastian Kripfganz
    replied
    Filip Novinc
    Thank you very much for reporting the two bugs with the CU-GMM estimator and the portmanteau test, and for sharing your data with me. I was able to replicate the bugs and implement a fix for them. An update to version 2.6.3 is now available on my personal website:
    Code:
    net install xtdpdgmm, from(http://www.kripfganz.de/stata/) replace



  • Sebastian Kripfganz
    replied
    (Numbering not consistent with your questions. Not sure what happened with your post above; this appears to be a bug in the forum software.)
    1. I would not necessarily be worried by these correlation coefficients; they might be large enough. You could check for potential identification problems with the underid command; see slides 43-47 of my 2019 London Stata Conference presentation.
    2. Potentially, yes. It might be enough to simply change the lags for those instruments (i.e. starting with one lag higher).
    3. Aside from maybe row 6, your difference-in-Hansen tests all look fine. I would not worry too much about row 6 (which still has a p-value higher than the usual critical values), given the overwhelming evidence of correct specification in the other rows. Regarding the interpretation, taking row 6 again: the test in column "Excluding" is a Hansen test for the model without the instruments in row 6 (see the list below the regression output). The test does not provide evidence of model misspecification if those instruments are left out. The column "Difference" then is a test for the validity of the additional instruments in row 6 (assuming validity of all other instruments), which is marginally acceptable here. To check that the difference GMM estimator is fine, I would first simply estimate the model with the difference GMM estimator only, i.e. do not include the instruments from row 7 onwards and specify the nolevel option. Then check the Hansen test for that model.
    4. All xtdpdgmm postestimation commands return results in r(); type return list after a postestimation command to see the full list. These results can then be fed into the usual commands for table generation.
    5. Regarding those error messages, would it be possible for you to send me your data set (or a suitable subset of it) by e-mail, so that I can replicate the error messages?
    6. These weak instrument procedures are not directly implemented in the xtdpdgmm package. Some might be available with ivreg2 or with other community-contributed commands that work after ivreg2. In that case, replicate the results first with ivreg2, as explained on slides 39-42 of my presentation.
    7. I do not have a good answer to the CUE question right now.
    8. You could again replicate the results with ivreg2 and then follow the procedure suggested by Baum et al. (2003). If there are problems with instrument relevance, underidentification tests with the underid command might detect them. I generally recommend routinely carrying out these underidentification tests.
    9. If there is indication of underidentification, then this would generally imply that your instruments are weak or highly multicollinear. The usual consequences from the weak-instruments literature apply (biased coefficients and test results, large standard errors/confidence intervals, etc.).
    10. Without looking into the Blundell and Bond (2000) paper, they might just have obtained predicted firm-specific effects (with option u of the predict command after xtdpdgmm) and then looked at their variance (simply computing summary statistics). Those group-fixed effects are the means of the error term for each firm. While these effects themselves cannot be estimated/predicted consistently, their variance estimate is consistent.
    11. Precision problems with the estimate of the weighting matrix typically occur when the cross-sectional sample size N is small, the number of instruments is large, or the instruments are weak. An indication might be large differences in the two versions of the Hansen test reported by estat overid after two-step estimation, or convergence problems of the iterated GMM estimator.
    12. If individuals have different starting points, a violation of the system GMM assumption might not be easily seen in the graphs. The best approach would usually be a difference-in-Hansen test, comparing the system GMM estimator with a difference GMM estimator (where the latter might possibly use nonlinear moment conditions as well to avoid weak identification problems). The residuals are already obtained for the model with log dependent variable; no need to apply logs again.
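    The suggestion in point 10 can be sketched as follows (uhat is an illustrative variable name):
    Code:
    * after the xtdpdgmm estimation:
    predict uhat, u     // predicted firm-specific effects (option u, as described)
    summarize uhat      // summary statistics; the reported variance is the estimate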



  • Filip Novinc
    replied
    My apologies for the long post. I am trying to rearrange it, but I am running into a bug that doesn't let me edit the post. This is my first time posting on the Stata forum. The question numbering is not correct: questions 5 and 6 are each numbered twice, but they are distinct questions.



  • Filip Novinc
    replied
    Dear professor Kripfganz,
    Thank you for all the information provided here. I have thoroughly read all the posts in the xtdpdgmm thread and have studied dynamic panel data estimation in depth for my doctoral dissertation, but I still haven't figured out a couple of questions.
    I have a panel of N=500 firms from the Croatian manufacturing industry (T=12) and am trying to estimate the impact of unit labour costs (lnulc1) on exports (lnexport3). Other regressors are lagged exports (l.lnexport3), material costs (lnumc), differenced fixed capital assets (dlnreal_K), workers employed (lnL), and intangible fixed assets (lnrealintangible_K); the ln prefix means the variable is in natural logarithms. I assume they are all endogenous; there might be unobserved firm characteristics, such as quality of management or ownership, that affect the behaviour of firms and their exports. I have several questions about my analysis and about xtdpdgmm in general:
    1. The correlation coefficient between (some) regressors and their differences is pretty small; e.g., for the first available instrument when lnulc1 is treated as endogenous (this applies to longer lags as well): Corr(d.lnulc1, l2.lnulc1) = -0.1385; Corr(lnulc1, d.lnulc1) = 0.1241. Could I have a weak-instrument problem even if the diff-in-Hansen test is fine? Or, since all instruments instrument all the regressors, which leads to better estimation, do I not have to worry about these correlations?
    2. If the diff-in-Hansen test is not satisfactory (p<0.1) for only one variable, for example in levels, does that mean all the estimated coefficients are biased and inconsistent?
    3. Could you please help with the interpretation of the diff-in-Hansen test?
      Code:
      . xtdpdgmm lnexport3 l.lnexport3 lnulc1 lnL dlnreal_K lnumc l.lnrealintangible_K, ///
      >     model(diff) collapse gmm(lnexport3, lag(2 3)) ///
      >     gmm(lnulc1, lag(2 3)) ///
      >     gmm(lnL, lag(2 3)) ///
      >     gmm(dlnreal_K, lag(1 2)) ///
      >     gmm(lnumc, lag(1 2)) ///
      >     gmm(lnrealintangible_K, lag(1 2)) ///
      >     gmm(lnexport3, lag(1 1) diff model(level)) ///
      >     gmm(lnulc1, lag(1 1) diff model(level)) ///
      >     gmm(lnL, lag(1 1) diff model(level)) ///
      >     gmm(dlnreal_K, lag(0 0) diff model(level)) ///
      >     gmm(lnumc, lag(0 0) diff model(level)) ///
      >     gmm(lnrealintangible_K, lag(0 0) diff model(level)) ///
      >     teffects twostep vce(robust, dc) small overid
      	
      	Generalized method of moments estimation
      	
      	Fitting full model:
      	Step 1         f(b) =  .01217351
      	Step 2         f(b) =  .02229234
      	
      	Fitting reduced model 1:
      	Step 1         f(b) =  .01825629
      	
      	Fitting reduced model 2:
      	Step 1         f(b) =  .01806326
      	
      	Fitting reduced model 3:
      	Step 1         f(b) =  .02210642
      	
      	Fitting reduced model 4:
      	Step 1         f(b) =  .02227238
      	
      	Fitting reduced model 5:
      	Step 1         f(b) =  .01835181
      	
      	Fitting reduced model 6:
      	Step 1         f(b) =  .01389141
      	
      	Fitting reduced model 7:
      	Step 1         f(b) =  .02192478
      	
      	Fitting reduced model 8:
      	Step 1         f(b) =  .02228061
      	
      	Fitting reduced model 9:
      	Step 1         f(b) =   .0193838
      	
      	Fitting reduced model 10:
      	Step 1         f(b) =  .02228957
      	
      	Fitting reduced model 11:
      	Step 1         f(b) =  .01851922
      	
      	Fitting reduced model 12:
      	Step 1         f(b) =  .02220404
      	
      	Fitting reduced model 13:
      	Step 1         f(b) =  .00270867
      	
      	Group variable: id                           Number of obs         =      4402
      	Time variable: year                          Number of groups      =       529
      	
      	Moment conditions:     linear =      29      Obs per group:    min =         1
      	                   nonlinear =       0                        avg =  8.321361
      	                       total =      29                        max =        11
      	
      	                                        (Std. Err. adjusted for 529 clusters in id)
      	------------------------------------------------------------------------------------
      	                  |              DC-Robust
      	        lnexport3 |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
      	-------------------+----------------------------------------------------------------
      	        lnexport3 |
      	              L1. |   .5724923   .0602836     9.50   0.000     .4540672    .6909175
      	                  |
      	           lnulc1 |  -.6139368   .1540915    -3.98   0.000    -.9166446   -.3112291
      	              lnL |   .7445906   .1702559     4.37   0.000     .4101284    1.079053
      	        dlnreal_K |   .0816588   .0346772     2.35   0.019     .0135365     .149781
      	            lnumc |  -.4469151   .1617449    -2.76   0.006    -.7646577   -.1291726
      	                  |
      	lnrealintangible_K |
      	              L1. |  -.0093538   .0119297    -0.78   0.433    -.0327893    .0140817
      	                  |
      	             year |
      	            2010  |   .0766462   .0537471     1.43   0.154    -.0289382    .1822306
      	            2011  |   .1398977   .0476322     2.94   0.003     .0463258    .2334697
      	            2012  |    .103861   .0426544     2.43   0.015     .0200678    .1876543
      	            2013  |   .1476749   .0466095     3.17   0.002      .056112    .2392378
      	            2014  |   .2201735   .0483601     4.55   0.000     .1251717    .3151754
      	            2015  |   .2367494   .0529298     4.47   0.000     .1327706    .3407282
      	            2016  |   .2454128   .0546268     4.49   0.000     .1381002    .3527254
      	            2017  |   .2708834   .0549217     4.93   0.000     .1629914    .3787753
      	            2018  |   .2706555   .0580685     4.66   0.000     .1565819    .3847292
      	            2019  |   .2205907   .0591024     3.73   0.000     .1044859    .3366955
      	                  |
      	            _cons |  -3.575295   .8959527    -3.99   0.000    -5.335364   -1.815225
      	------------------------------------------------------------------------------------
      	Instruments corresponding to the linear moment conditions:
      	1, model(diff):
      	  L2.lnexport3 L3.lnexport3
      	2, model(diff):
      	  L2.lnulc1 L3.lnulc1
      	3, model(diff):
      	  L2.lnL L3.lnL
      	4, model(diff):
      	  L1.dlnreal_K L2.dlnreal_K
      	5, model(diff):
      	  L1.lnumc L2.lnumc
      	6, model(diff):
      	  L1.lnrealintangible_K L2.lnrealintangible_K
      	7, model(level):
      	  L1.D.lnexport3
      	8, model(level):
      	  L1.D.lnulc1
      	9, model(level):
      	  L1.D.lnL
      	10, model(level):
      	  D.dlnreal_K
      	11, model(level):
      	  D.lnumc
      	12, model(level):
      	  D.lnrealintangible_K
      	13, model(level):
      	  2010bn.year 2011.year 2012.year 2013.year 2014.year 2015.year 2016.year
      	  2017.year 2018.year 2019.year
      	14, model(level):
      	  _cons
      	
      	.
      	end of do-file
      	
      	. estat overid, difference
      	
      	Sargan-Hansen (difference) test of the overidentifying restrictions
      	H0: (additional) overidentifying restrictions are valid
      	
      	2-step weighting matrix from full model
      	
      	                 | Excluding                   | Difference                  
      	Moment conditions |       chi2     df         p |        chi2     df         p
      	------------------+-----------------------------+-----------------------------
      	  1, model(diff) |     9.6576     10    0.4710 |      2.1351      2    0.3439
      	  2, model(diff) |     9.5555     10    0.4803 |      2.2372      2    0.3267
      	  3, model(diff) |    11.6943     10    0.3060 |      0.0984      2    0.9520
      	  4, model(diff) |    11.7821     10    0.2999 |      0.0106      2    0.9947
      	  5, model(diff) |     9.7081     10    0.4665 |      2.0845      2    0.3527
      	  6, model(diff) |     7.3486     10    0.6922 |      4.4441      2    0.1084
      	 7, model(level) |    11.5982     11    0.3946 |      0.1944      1    0.6592
      	 8, model(level) |    11.7864     11    0.3799 |      0.0062      1    0.9372
      	 9, model(level) |    10.2540     11    0.5077 |      1.5386      1    0.2148
      	10, model(level) |    11.7912     11    0.3795 |      0.0015      1    0.9695
      	11, model(level) |     9.7967     11    0.5488 |      1.9960      1    0.1577
      	12, model(level) |    11.7459     11    0.3830 |      0.0467      1    0.8289
      	13, model(level) |     1.4329      2    0.4885 |     10.3598     10    0.4095
    If I understand correctly, all the model(level) p-values (rows 7-13) under the column Excluding have to be higher than (say) 0.1 to conclude that the difference GMM estimator is fine and we may try SYS-GMM. Then we go to the column Difference, where we look at both the model(diff) and model(level) rows, and all of them need to be fine in order to say that SYS-GMM is OK and we can proceed with the analysis. Is my understanding correct?
    4. Is there a way to extract the results of the Hansen, difference-in-Hansen, underid, and AR(2) tests from multiple estimations into the same table after applying xtdpdgmm?
    5. When I run continuously updated GMM with the following code, the error below is returned. Could you please help? I am running Stata 14.0 and xtdpdgmm is up to date (just checked):
      Code:
      . xtdpdgmm lnexport3 l.lnexport3 lnulc1 lnL dlnreal_K lnumc l.lnintangible_K, ///
      >     model(diff) collapse gmm(lnexport3, lag(2 5)) ///
      >     gmm(lnulc1, lag(2 5)) ///
      >     gmm(lnL, lag(2 5)) ///
      >     gmm(dlnreal_K, lag(2 5)) ///
      >     gmm(lnumc, lag(2 5)) ///
      >     gmm(lnintangible_K, lag(2 5)) ///
      >     gmm(lnexport3, lag(1 2) diff model(level)) ///
      >     gmm(lnulc1, lag(1 2) diff model(level)) ///
      >     gmm(lnL, lag(1 2) diff model(level)) ///
      >     gmm(dlnreal_K, lag(1 2) diff model(level)) ///
      >     gmm(lnumc, lag(1 2) diff model(level)) ///
      >     gmm(lnintangible_K, lag(1 2) diff model(level)) ///
      >     teffects cu vce(r) overid
      	
      	Generalized method of moments estimation
      	         asarray_keys():  3301  subscript invalid
      	     xtdpdgmm_opt::iv():     -  function returned error
      	   xtdpdgmm_opt::init():     -  function returned error
      	             xtdpdgmm():     -  function returned error
      	                <istmt>:     -  function returned error
    6. When I try the Jochmans (2020) portmanteau test (after the SYS-GMM estimation from the previous point), I get the following error:

      Code:
      	. estat serialpm
      	    xtdpdgmm_serialpm():  3200  conformability error
      	                <istmt>:     -  function returned error
      	r(3200);
    Bun, M. J., & Windmeijer, F. (2010). The weak instrument problem of the system GMM estimator in dynamic panel data models. The Econometrics Journal, 13(1), 95-126.
    Blundell, R., & Bond, S. (2000). GMM estimation with persistent panel data: an application to production functions. Econometric Reviews, 19(3), 321-340.


    10. Slide 94 of your 2019 presentation says: "If there are concerns about the imprecisely estimated optimal weighting matrix, the one-step GMM estimator with robust standard errors might be used instead." How can I know whether the optimal weighting matrix is imprecisely estimated?

    11. The Blundell-Bond assumption essentially states that deviations from long-run means must not be correlated with the fixed effects for SYS-GMM to be valid. First, can I check this assumption with a graph that puts dl.lnexport3 on the x-axis and the residuals on the y-axis: if there is no correlation between the two (for, let's say, the first few periods, t<5, since my T=12), is the B-B assumption fine? (lnexport3 is the left-hand-side variable in my estimated model.) Roodman (2009) makes such graphs on page 146, but states: "In sum, for the particular case where individuals have a common starting point, the validity of system GMM is equivalent to all having achieved mean stationarity by the study period." Does that mean these graphs show the violation/validity of the B-B assumption only if the individuals (firms) have a common starting point, i.e. we look at the same time period for all individuals? Lastly, do the residuals have to be logged, since lnexport3 is the natural logarithm of exports?
    Roodman, D. (2009). A note on the theme of too many instruments. Oxford Bulletin of Economics and Statistics, 71(1), 135-158.
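    The graph described in point 11 could be sketched like this (the predict option name for residuals is an assumption; check the xtdpdgmm postestimation help file):
    Code:
    * after the xtdpdgmm estimation:
    predict ehat, e                    // residuals; option name assumed
    twoway scatter ehat LD.lnexport3   // LD. = lagged first difference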
    Last edited by Filip Novinc; 25 Apr 2023, 03:15.



  • Sebastian Kripfganz
    replied
    You can use the community-contributed ivreg2 command. Slides 39 to 42 of my 2019 London Stata Conference presentation provide an example of how to replicate xtdpdgmm results with ivreg2. You can then simply amend the latter command to obtain Driscoll-Kraay standard errors (option dkraay()).
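    A minimal sketch with placeholder names (y, x1, x2, z1, z2 and the bandwidth are illustrative; first replicate the exact xtdpdgmm specification as shown on the slides):
    Code:
    * Driscoll-Kraay standard errors with ivreg2; requires xtset panel data
    ivreg2 y x1 (x2 = z1 z2), gmm2s dkraay(4)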



  • Mugabil Isayev
    replied
    Dear Sebastian Kripfganz,

    Thank you for the response. Is there any command in Stata that lets me incorporate Driscoll-Kraay SEs into GMM?



  • Sebastian Kripfganz
    replied
    These two commands do not compute Driscoll-Kraay standard errors, sorry.



  • Mugabil Isayev
    replied
    Dear Sebastian Kripfganz,

    I want to simultaneously account for cross-sectional dependence and endogeneity. Is it possible to incorporate Driscoll-Kraay standard errors into xtdpdgmm or xtabond2?

    Thanks in advance.

