
  • The diff suboption applies a first-difference transformation to the instruments (not the model). If the untransformed variables X1 X2 have a constant correlation over time with the unobserved effects, their first differences D.X1 D.X2 will be uncorrelated with the unobserved effects. Differencing the instruments is thus necessary to obtain valid instruments for the level model.
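    As a minimal sketch (hypothetical variables Y, X1, X2; the lag ranges are assumptions chosen for illustration), the diff suboption is combined with model(level) so that the differenced variables serve as instruments for the untransformed model:
    Code:
    * sketch only: Y endogenous, X1 X2 assumed predetermined
    xtdpdgmm L(0/1).Y X1 X2, model(diff) collapse gmm(Y, lag(2 4)) gmm(X1 X2, lag(1 3)) gmm(X1 X2, lag(0 0) diff model(level)) twostep vce(robust)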
    https://twitter.com/Kripfganz

    • Ok, Prof. Kripfganz. So how does one determine whether X1 and X2 have a constant correlation over time with the unobserved effects? Moreover, is it standard practice to difference the instruments (i.e., by using the diff suboption) to obtain valid instruments?

      • Yes, it is standard practice to use differenced instruments for the model in levels; see e.g. slides 30 and following in my 2019 London Stata Conference presentation.

        A conventional specification check for the constant correlation over time would be a difference-in-Hansen test for the validity of the respective instruments for the level model; slides 48 and following in my presentation.
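        A hedged sketch of such a specification check (hypothetical variables Y and X1; the lag ranges are assumptions for illustration): the differenced instruments for the level model are specified in separate gmm() terms so that their incremental contribution can be tested.
        Code:
        * sketch only: system GMM with differenced instruments for the level model
        xtdpdgmm L(0/1).Y X1, model(diff) collapse gmm(Y, lag(2 4)) gmm(X1, lag(1 3)) gmm(Y, lag(1 1) diff model(level)) gmm(X1, lag(0 0) diff model(level)) twostep vce(robust) overid
        * difference-in-Hansen tests, including those for the level-model instruments
        estat overid, difference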
        https://twitter.com/Kripfganz

        • Thanks a lot, Prof. Kripfganz. I have another question. For the output shown below, please let me know why the AR(2) test statistic was not calculated.

          Code:
          Generalized method of moments estimation
          
          Fitting full model:
          Step 1         f(b) =  5555652.4
          Step 2         f(b) =  .04704686
          
          Group variable: CompanyID                    Number of obs         =      1170
          Time variable: Year                          Number of groups      =       396
          
          Moment conditions:     linear =      21      Obs per group:    min =         1
                              nonlinear =       0                        avg =  2.954545
                                  total =      21                        max =         3
          
                                      (Std. Err. adjusted for 396 clusters in CompanyID)
          ------------------------------------------------------------------------------
                       |              WC-Robust
                   PAT |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
          -------------+----------------------------------------------------------------
                   PAT |
                   L1. |    .973882   .1321274     7.37   0.000     .7149171    1.232847
                       |
                   Lev |  -4838.181   2017.099    -2.40   0.016    -8791.623    -884.739
             Log_sales |  -80.49196   287.9182    -0.28   0.780    -644.8013    483.8174
                       |
                  Year |
                 2014  |  -130.4292   148.6886    -0.88   0.380    -421.8534     160.995
                 2015  |   134.1587   180.3201     0.74   0.457    -219.2622    487.5795
                       |
                 _cons |   2106.412    2522.11     0.84   0.404    -2836.834    7049.657
          ------------------------------------------------------------------------------
          
          . estat serial
          
          Arellano-Bond test for autocorrelation of the first-differenced residuals
          H0: no autocorrelation of order 1:     z =   -3.3944   Prob > |z|  =    0.0007
          H0: no autocorrelation of order 2:     z =         .   Prob > |z|  =         .

          • You need at least 4 time periods to calculate the AR(2) test statistic. With just 3 time periods (in levels), you only have 2 time periods in first differences, such that the second lag cannot be computed.
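            A quick way to confirm how many time periods are actually available (assuming the panel is declared with CompanyID and Year as in the output above):
            Code:
            * check the panel's time dimension before estimation
            xtset CompanyID Year
            xtdescribe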
            https://twitter.com/Kripfganz

            • Thank you so much, Prof. Kripfganz. I shall try to increase the time periods.

              • I received a question by e-mail which is easier to answer here on the forum, and which might be of interest to others:
                How can we check for cross-sectional dependence after estimating the SYS-GMM with xtdpdgmm?
                According to your presentation, we can only do diagnostic checks with estat serial and estat overid. I wonder if we can run the Sarafidis et al. (2009) testing procedure for error cross-section dependence after estimating SYS-GMM with xtdpdgmm.
                Let us start with a DIF-GMM estimator. The proposed test for cross-sectional dependence is then simply an incremental Hansen test for the validity of the moment conditions for the lagged dependent variable, and it can be implemented using xtdpdgmm as follows:
                Code:
                . webuse abdata
                
                . xtdpdgmm L(0/1).n L(0/1).(w k), gmm(L.n, lag(1 5) model(diff)) gmm(w k, lag(2 6) model(diff)) teffects collapse twostep vce(robust) overid
                
                (estimation output partially omitted)
                ------------------------------------------------------------------------------
                Instruments corresponding to the linear moment conditions:
                 1, model(diff):
                   L1.L.n L2.L.n L3.L.n L4.L.n L5.L.n
                 2, model(diff):
                   L2.w L3.w L4.w L5.w L6.w L2.k L3.k L4.k L5.k L6.k
                 3, model(level):
                   1978bn.year 1979.year 1980.year 1981.year 1982.year 1983.year 1984.year
                 4, model(level):
                   _cons
                
                . estat overid, difference
                
                Sargan-Hansen (difference) test of the overidentifying restrictions
                H0: (additional) overidentifying restrictions are valid
                
                2-step weighting matrix from full model
                
                                  | Excluding                   | Difference                  
                Moment conditions |       chi2     df         p |        chi2     df         p
                ------------------+-----------------------------+-----------------------------
                   1, model(diff) |     9.1619      5    0.1028 |      2.9311      5    0.7106
                   2, model(diff) |     0.0000      0         . |     12.0930     10    0.2789
                  3, model(level) |     3.7749      3    0.2868 |      8.3180      7    0.3054
                      model(diff) |          .     -5         . |           .      .         .
                The relevant test is the "Difference" test for the first set of instruments, labeled "1, model(diff)". If there was cross-sectional dependence, we would expect this test to reject the null hypothesis. Here, this is not the case given that we have a p-value of 0.71. (Note that this test is only applicable if the "Excluding" test does not reject the null hypothesis, which is the case here.)

                With a system GMM estimator, the test unfortunately requires a bit more effort because it involves jointly testing the validity of the moment conditions for the lagged dependent variable in both the differenced and the level model. This is not (yet) directly possible with the above procedure. However, there is a workaround which involves estimating the full model and the reduced model (i.e. the model without the instruments under investigation) separately:
                Code:
                . xtdpdgmm L(0/1).n L(0/1).(w k), gmm(L.n, lag(1 5) model(diff)) gmm(w k, lag(2 6) model(diff)) gmm(L.n, lag(0 0) model(level)) gmm(w k, lag(1 1) model(level)) teffects collapse twostep vce(robust) overid
                
                (estimation output partially omitted)
                ------------------------------------------------------------------------------
                Instruments corresponding to the linear moment conditions:
                 1, model(diff):
                   L1.L.n L2.L.n L3.L.n L4.L.n L5.L.n
                 2, model(diff):
                   L2.w L3.w L4.w L5.w L6.w L2.k L3.k L4.k L5.k L6.k
                 3, model(level):
                   L.n
                 4, model(level):
                   L1.w L1.k
                 5, model(level):
                   1978bn.year 1979.year 1980.year 1981.year 1982.year 1983.year 1984.year
                 6, model(level):
                   _cons
                
                . estat overid, difference
                
                Sargan-Hansen (difference) test of the overidentifying restrictions
                H0: (additional) overidentifying restrictions are valid
                
                2-step weighting matrix from full model
                
                                  | Excluding                   | Difference                  
                Moment conditions |       chi2     df         p |        chi2     df         p
                ------------------+-----------------------------+-----------------------------
                   1, model(diff) |    10.3512      8    0.2412 |      2.9586      5    0.7064
                   2, model(diff) |     1.0343      3    0.7930 |     12.2755     10    0.2670
                  3, model(level) |    12.9565     12    0.3722 |      0.3533      1    0.5523
                  4, model(level) |    12.2839     11    0.3427 |      1.0259      2    0.5987
                  5, model(level) |     8.8893      6    0.1799 |      4.4204      7    0.7303
                      model(diff) |          .     -2         . |           .      .         .
                     model(level) |     3.7790      3    0.2863 |      9.5308     10    0.4826
                
                . estimates store full
                
                . xtdpdgmm L(0/1).n L(0/1).(w k), gmm(w k, lag(2 6) model(diff)) gmm(w k, lag(1 1) model(level)) teffects collapse twostep vce(robust)
                
                (estimation output partially omitted)
                ------------------------------------------------------------------------------
                Instruments corresponding to the linear moment conditions:
                 1, model(diff):
                   L2.w L3.w L4.w L5.w L6.w L2.k L3.k L4.k L5.k L6.k
                 2, model(level):
                   L1.w L1.k
                 3, model(level):
                   1978bn.year 1979.year 1980.year 1981.year 1982.year 1983.year 1984.year
                 4, model(level):
                   _cons
                
                . estat overid
                
                Sargan-Hansen test of the overidentifying restrictions
                H0: overidentifying restrictions are valid
                
                2-step moment functions, 2-step weighting matrix       chi2(7)     =   10.4055
                                                                       Prob > chi2 =    0.1667
                (postestimation output partially omitted)
                
                . estat overid full
                
                Sargan-Hansen difference test of the overidentifying restrictions
                H0: additional overidentifying restrictions are valid
                
                2-step moment functions, 2-step weighting matrix       chi2(6)     =    2.9043
                                                                       Prob > chi2 =    0.8208
                (postestimation output partially omitted)
                
                . estat hausman full
                
                Generalized Hausman test                               chi2(6)     =    3.7445
                H0: coefficients do not systematically differ          Prob > chi2 =    0.7112
                After estimating the full model, I first checked for any sign of misspecification of the level moment conditions, which could indicate that the additional Blundell-Bond assumption for the SYS-GMM estimator might not be satisfied. Here, all the p-values from the Difference-in-Hansen tests are acceptable.
                After estimating the reduced model, the first estat overid command checks for any misspecification in the reduced model. This is similar to the "Excluding" test in the earlier DIF-GMM example. Correct specification of the reduced model is again a prerequisite for the subsequent tests. For the next estat overid command, I have supplied the name of the stored estimation results from the full model. This postestimation command now computes a difference-in-Hansen test by simply taking the difference of the two Hansen overidentification test statistics from the two models. This is conceptually the same as the "Difference" test in the earlier example, except that the two test statistics compared here are based on separate estimates of the variance-covariance matrix, whereas in the earlier example only the variance estimates from the full model were used. Asymptotically, both approaches are equivalent.
                Finally, the generalized Hausman test again compares the two models using the Hausman principle as an alternative to the Difference-in-Hansen test. However, the Hausman test tends to have poor finite-sample performance. In our case here, neither of the two tests rejects the null hypothesis that the full model is correctly specified. If there was evidence of cross-sectional dependence, we would expect to see a rejection by these tests.
                https://twitter.com/Kripfganz

                • Dear Prof. Kripfganz,


                  I tried to check my two-step system GMM model with the diagnostic checks you posted in #367. My GMM-type variables are Y, X2, X3, and X5, and their lag range is set from one to three. The standard-type instrumental variables are the first, second, and third lags of X1, X4, and X6.
                  I have the following outcome. The p-values of estat overid are significant, suggesting that my model is misspecified. Also, the Hausman test suggests that I have cross-sectional dependence. What would you recommend to solve these two problems?



                  . xtdpdgmm L(0/1).Y X1 X2 X3 X4 X5 X6, model(diff) collapse gmm(Y X2 X3 X5, lag(1 3)) gmm(X1 X4 X6, lag(1 3)) gmm(Y X2 X3 X5, lag(1 1) diff model(level)) gmm(X1 X4 X6, lag (0 0) diff model (level)) two vce(r) overid

                  Generalized method of moments estimation

                  Fitting full model:
                  Step 1 f(b) = .01275011
                  Step 2 f(b) = .94657417

                  Fitting reduced model 1:
                  Step 1 f(b) = .77326433

                  Fitting reduced model 2:
                  Step 1 f(b) = .84751107

                  Fitting reduced model 3:
                  Step 1 f(b) = .86874136

                  Fitting reduced model 4:
                  Step 1 f(b) = .89973548

                  Fitting no-diff model:
                  Step 1 f(b) = 2.604e-09

                  Fitting no-level model:
                  Step 1 f(b) = .78165801

                  Group variable: iso_num                      Number of obs         =       336
                  Time variable: year                          Number of groups      =        28

                  Moment conditions:     linear =      29      Obs per group:    min =        12
                                      nonlinear =       0                        avg =        12
                                          total =      29                        max =        12

                                                (Std. Err. adjusted for 28 clusters in iso_num)
                  ------------------------------------------------------------------------------
                               |              WC-Robust
                             Y |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
                  -------------+----------------------------------------------------------------
                             Y |
                           L1. |   .7967178   .0652365    12.21   0.000     .6688566      .924579
                               |
                            X1 |  -.1074083   .0301788    -3.56   0.000    -.1665576      -.048259
                            X2 |   .0900069   .0329441     2.73   0.006     .0254376     .1545763
                            X3 |  -.0131671   .0110706    -1.19   0.234     -.034865     .0085307
                            X4 |    .226282   .0839039     2.70   0.007     .0618333     .3907307
                            X5 |  -.0476106   .0942898    -0.50   0.614    -.2324151     .1371939
                            X6 |  -.1638557   .0978363    -1.67   0.094    -.3556113     .0278999
                         _cons |  -2.162016   .5205269    -4.15   0.000     -3.18223    -1.141802
                  ------------------------------------------------------------------------------
                  Instruments corresponding to the linear moment conditions:
                  1, model(diff):
                  L1.Y L2.Y L3.Y L1.X2 L2.X2 L3.X2 L1.X3 L2.X3 L3.X3 L1.X5 L2.X5 L3.X5
                  2, model(diff):
                  L1.X1 L2.X1 L3.X1 L1.X4 L2.X4 L3.X4 L1.X6 L2.X6 L3.X6
                  3, model(level):
                  L1.D.Y L1.D.X2 L1.D.X3 L1.D.X5
                  4, model(level):
                  D.X1 D.X4 D.X6
                  5, model(level):
                  _cons

                  . estat overid, difference

                  Sargan-Hansen (difference) test of the overidentifying restrictions
                  H0: (additional) overidentifying restrictions are valid

                  2-step weighting matrix from full model

                                    | Excluding                   | Difference                  
                  Moment conditions |       chi2     df         p |        chi2     df         p
                  ------------------+-----------------------------+-----------------------------
                     1, model(diff) |    21.6514      9    0.0101 |      4.8527     12    0.9627
                     2, model(diff) |    23.7303     12    0.0221 |      2.7738      9    0.9726
                    3, model(level) |    24.3248     17    0.1109 |      2.1793      4    0.7028
                    4, model(level) |    25.1926     18    0.1197 |      1.3115      3    0.7264
                        model(diff) |     0.0000      0         . |     26.5041     21    0.1879
                       model(level) |    21.8864     14    0.0810 |      4.6177      7    0.7065
                  . estimates store full

                  . xtdpdgmm L(0/1).Y X1 X2 X3 X4 X5 X6, model(diff) collapse gmm(X1 X4 X6, lag(1 3)) gmm(X1 X4 X6, lag(0 0) diff model (level)) two vce(r)

                  Generalized method of moments estimation

                  Fitting full model:
                  Step 1 f(b) = .0013473
                  Step 2 f(b) = .35246126

                  Group variable: iso_num                      Number of obs         =       336
                  Time variable: year                          Number of groups      =        28

                  Moment conditions:     linear =      13      Obs per group:    min =        12
                                      nonlinear =       0                        avg =        12
                                          total =      13                        max =        12

                                                (Std. Err. adjusted for 28 clusters in iso_num)
                  ------------------------------------------------------------------------------
                               |              WC-Robust
                             Y |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
                  -------------+----------------------------------------------------------------
                             Y |
                           L1. |   .6915241   .3140208     2.20   0.028     .0760547     1.306994
                               |
                            X1 |  -.0796047   .0434806    -1.83   0.067    -.1648251     .0056158
                            X2 |   .1766118   .1830601     0.96   0.335    -.1821794      .535403
                            X3 |   .0940592   .0302717     3.11   0.002     .0347278     .1533906
                            X4 |   .1856448   .1214403     1.53   0.126    -.0523739     .4236635
                            X5 |  -.0955597   .1559156    -0.61   0.540    -.4011488     .2100293
                            X6 |  -.1655075    .653214    -0.25   0.800    -1.445784     1.114769
                         _cons |   -2.94722   2.484702    -1.19   0.236    -7.817146     1.922706
                  ------------------------------------------------------------------------------
                  Instruments corresponding to the linear moment conditions:
                  1, model(diff):
                  L1.X1 L2.X1 L3.X1 L1.X4 L2.X4 L3.X4 L1.X6 L2.X6 L3.X6
                  2, model(level):
                  D.X1 D.X4 D.X6
                  3, model(level):
                  _cons

                  . estat overid

                  Sargan-Hansen test of the overidentifying restrictions
                  H0: overidentifying restrictions are valid

                  2-step moment functions, 2-step weighting matrix chi2(5) = 9.8689
                  Prob > chi2 = 0.0790

                  2-step moment functions, 3-step weighting matrix chi2(5) = 14.9084
                  Prob > chi2 = 0.0108

                  . estat overid full

                  Sargan-Hansen difference test of the overidentifying restrictions
                  H0: additional overidentifying restrictions are valid

                  2-step moment functions, 2-step weighting matrix chi2(16) = 16.6352
                  Prob > chi2 = 0.4096

                  2-step moment functions, 3-step weighting matrix chi2(16) = 13.0916
                  Prob > chi2 = 0.6661

                  . estat hausman full

                  Generalized Hausman test chi2(7) = 30.3760
                  H0: coefficients do not systematically differ Prob > chi2 = 0.0001

                    • Hello Sebastian. How does one test for the presence of endogeneity when using the xtdpdgmm command?

                    • Dear Sebastian,

                      Thanks for developing the xtdpdgmm package, which is very helpful for researchers. I have a question regarding the serial correlation test.

                      Regarding the specification test of serial correlation, for example
                      Code:
                      estat serial, ar(1/3)
                      presents the serial correlation tests of the first-differenced error terms from order one up to order three.

                      However, if I use the FOD transformation
                      Code:
                      model(fodev)
                      can I still use
                      Code:
                      estat serial
                      for the serial correlation test? Or is there even no need to conduct a serial correlation test for FOD-GMM models, since, as on slide 66/128 of your tutorial slides, the FOD transformation yields serially uncorrelated errors in the FOD form?

                      Thank you and look forward to hearing from you.

                      Best
                      Jerry

                      • Sarah Magd
                        A rejection of the test outlined in my post #367 above does not provide conclusive evidence of cross-sectional dependence. If there is cross-sectional dependence, we would expect to reject the null hypothesis; however, a rejection could also be due to many reasons other than cross-sectional dependence. In your case, the instruments gmm(Y, lag(1 3) model(diff)) are invalid by construction of the model: the first lag of Y cannot be a valid instrument for the first-differenced model because it is correlated with the first-differenced error term. You need to start with lag 2 (or lag 1 of L.Y). This misspecification alone would already explain the rejection of the test.
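                        A hedged sketch of this correction, applied to the command from your post (only the instruments for the lagged dependent variable are changed; all other choices are left exactly as you specified them and still need to be justified):
                        Code:
                        * sketch only: instrument L.Y from lag 1 (equivalently, Y from lag 2) in the first-differenced model
                        xtdpdgmm L(0/1).Y X1 X2 X3 X4 X5 X6, model(diff) collapse gmm(L.Y X2 X3 X5, lag(1 3)) gmm(X1 X4 X6, lag(1 3)) gmm(Y X2 X3 X5, lag(1 1) diff model(level)) gmm(X1 X4 X6, lag(0 0) diff model(level)) two vce(r) overid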

                        Wycliff Ombuki
                        You could use a difference-in-Hansen test to check for endogeneity against weak or strict exogeneity. In a very simplified model, let X be a regressor for which you are unsure whether it should be specified as endogenous or predetermined. You would then specify separately the additional instrument which is only valid under predeterminedness and check the difference-in-Hansen test for this instrument:
                        Code:
                        xtdpdgmm Y X, model(diff) gmm(X, lag(2 .)) gmm(X, lag(1 1)) twostep vce(robust) overid
                        estat overid, difference
                        In a next step, you could add gmm(X, lag(0 0)) to test weak versus strict exogeneity. See the section on Model Selection in my 2019 London Stata Conference presentation.
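                        For completeness, a sketch of that next step (same hypothetical Y and X as above):
                        Code:
                        * sketch only: add the lag-0 instrument that is valid only under strict exogeneity
                        xtdpdgmm Y X, model(diff) gmm(X, lag(2 .)) gmm(X, lag(1 1)) gmm(X, lag(0 0)) twostep vce(robust) overid
                        estat overid, difference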

                        Jerry Kim
                        The serial correlation test is still applicable after estimating the model with the FOD transformation. The test is still carried out for the first-differenced residuals. It is still relevant because the FOD transformation only yields serially uncorrelated errors if the untransformed errors were serially uncorrelated as well (which we test by checking for absence of second- and higher-order serial correlation in first differences).
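                        A hedged sketch (hypothetical variables Y and X; the instrument choices are illustrative assumptions): estimate with the FOD transformation and then run the usual test on the first-differenced residuals.
                        Code:
                        * sketch only: FOD-transformed model, serial correlation test up to order 3
                        xtdpdgmm L(0/1).Y X, model(fodev) collapse gmm(Y, lag(1 3)) gmm(X, lag(1 3)) twostep vce(robust)
                        estat serial, ar(1/3)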
                        https://twitter.com/Kripfganz

                        • Dear Prof. Kripfganz,
                          Thanks a lot for the kind reply.
                          I am estimating a two-step system GMM model. The GMM-type variables are Y, X2, X3, and X5. The standard-type instrumental variables are X1, X4, and X6. I am interested in measuring the effect of X1 on Y while controlling for X2-X6. If I follow slide 36 in your presentation, I get the following command:
                          xtdpdgmm L(0/1).Y X1 X2 X3 X4 X5 X6, model(diff) collapse gmm(Y X2 X3 X5, lag(2 4)) gmm(X1 X4 X6, lag(1 3)) gmm(L.Y X2 X3 X5, lag(1 1) diff model(level)) gmm(X1 X4 X6, lag (0 0) diff model (level)) two vce(r) overid


                          Group variable: iso_num                      Number of obs         =       336
                          Time variable: year                          Number of groups      =        28

                          Moment conditions:     linear =      29      Obs per group:    min =        12
                                              nonlinear =       0                        avg =        12
                                                  total =      29                        max =        12

                          (Std. Err. adjusted for 28 clusters in iso_num)
                          ------------------------------------------------------------------------------
                          | WC-Robust
                          Y | Coef. Std. Err. z P>|z| [95% Conf. Interval]
                          -------------+----------------------------------------------------------------
                          Y |
                          L1. | .750354 .0993251 7.55 0.000 .5556804 .9450276
                          |
                          X1 | -.0971481 .026813 -3.62 0.000 -.1497006 -.0445957
                          X2 | .1206079 .0417213 2.89 0.004 .0388356 .2023802
                          X3 | -.0106342 .0094052 -1.13 0.258 -.029068 .0077996
                          X4 | .1750596 .0703999 2.49 0.013 .0370783 .3130409
                          X5 | -.0485218 .0645342 -0.75 0.452 -.1750065 .077963
                          X6 | -.0862296 .0877262 -0.98 0.326 -.2581698 .0857107
                          _cons | -2.368251 .7525372 -3.15 0.002 -3.843196 -.8933049
                          ------------------------------------------------------------------------------

                          ###########################################################################################
                          However, if I follow the code you have written in #367, I would write it as:
                          xtdpdgmm L(0/1).Y X1 X2 X3 X4 X5 X6, gmm(L.Y X2 X3 X5, lag(1 3) model(diff)) gmm(X1 X4 X6, lag(2 4) model(diff)) gmm(L.Y X2 X3 X5, lag(0 0) model(level)) gmm(X1 X4 X6, lag(1 1) model(level)) collapse twostep vce(robust) overid
                          Group variable: iso_num                      Number of obs         =       336
                          Time variable: year                          Number of groups      =        28

                          Moment conditions:     linear =      29      Obs per group:    min =        12
                                              nonlinear =       0                        avg =        12
                                                  total =      29                        max =        12

                          (Std. Err. adjusted for 28 clusters in iso_num)
                          ------------------------------------------------------------------------------
                          | WC-Robust
                          Y | Coef. Std. Err. z P>|z| [95% Conf. Interval]
                          -------------+----------------------------------------------------------------
                          Y |
                          L1. | .9801545 .0080521 121.73 0.000 .9643727 .9959363
                          |
                          X1 | .0113918 .0145576 0.78 0.434 -.0171407 .0399242
                          X2 | .0061909 .0067092 0.92 0.356 -.0069589 .0193408
                          X3 | -.027684 .0097786 -2.83 0.005 -.0468496 -.0085184
                          X4 | .0029569 .0073707 0.40 0.688 -.0114895 .0174032
                          X5 | .0124367 .0095314 1.30 0.192 -.0062445 .0311179
                          X6 | -.0314285 .0125523 -2.50 0.012 -.0560305 -.0068264
                          _cons | -.1772037 .1035587 -1.71 0.087 -.3801751 .0257677
                          ------------------------------------------------------------------------------
                          ###########################################################################################
                          As you can see, the two commands give different results for my variable of interest (i.e., X1).
                          I am very confused. Could you please guide me on which command I should use?




                          • The two commands differ in the following way of how instruments are specified:

                            Command 1:
                            • Lags 2-4 of X2 X3 X5 are used for the first-differenced model, and their first-differenced lag 1 for the level model: This is consistent with treating those variables as endogenous.
                            • Lags 1-3 of X1 X4 X6 are used for the first-differenced model, and their first-differenced lag 0 for the level model: This is consistent with treating those variables as predetermined.
                            • The first-differenced lag 2 of Y (equivalently, lag 1 of L.Y) is used for the level model. This is not normally done. The usual approach would be to use the first-differenced lag 1 of Y (or lag 0 of L.Y), treating L.Y as predetermined, as in command 2.

                            Command 2:
                            • Lags 1-3 of X2 X3 X5 are used for the first-differenced model, and their first-differenced lag 0 for the level model: This is consistent with treating those variables as predetermined.
                            • Lags 2-4 of X1 X4 X6 are used for the first-differenced model, and their first-differenced lag 1 for the level model: This is consistent with treating those variables as endogenous.
                            As you can see, the two commands specify quite different models regarding the assumptions on the regressors. It is thus not surprising that the results differ. You need to classify your variables first into endogenous, predetermined, and possibly even strictly exogenous, and then specify the instruments accordingly. The lagged dependent variable L.Y should normally be treated as predetermined (equivalently, the dependent variable Y itself is endogenous).
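                            Purely as an illustration of how such a classification maps into the command (the classification below is an assumption for the sketch, not a recommendation): with L.Y and X1 X4 X6 treated as predetermined and X2 X3 X5 as endogenous, a system GMM specification could look as follows.
                            Code:
                            * sketch only: assumed classification -- L.Y, X1, X4, X6 predetermined; X2, X3, X5 endogenous
                            xtdpdgmm L(0/1).Y X1 X2 X3 X4 X5 X6, model(diff) collapse gmm(L.Y X1 X4 X6, lag(1 3)) gmm(X2 X3 X5, lag(2 4)) gmm(L.Y X1 X4 X6, lag(0 0) diff model(level)) gmm(X2 X3 X5, lag(1 1) diff model(level)) twostep vce(robust) overid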
                            https://twitter.com/Kripfganz

                            • An update for xtdpdgmm to version 2.3.10 is now available on my personal website:
                              Code:
                              net install xtdpdgmm, from(http://www.kripfganz.de/stata) replace
                              This update adds the option nolevel with the following implications:
                              • The option nolevel changes the default for the model() option to model(difference). Any gmm() and iv() instruments explicitly specified for model(level) will be ignored. (An intercept is still estimated for the level model unless option noconstant is specified as well.)
                              • If time effects are added with option teffects, the option nolevel implies that they will be instrumented with standard instruments for the transformed model (as specified with option model()) instead of the level model. While time dummies are always valid instruments in the untransformed model, specifying them for the transformed model instead can still be useful to exactly replicate results from a "difference-GMM" estimator (in unbalanced panels).
                              • In combination with option small, the option nolevel corrects the small-sample standard-error adjustment for the reduction of time periods in the transformed model or the absorbed group-specific effects.
                              With the latter modification, xtdpdgmm can now replicate the standard errors of the fixed-effects estimator in a balanced panel:
                              Code:
                              webuse psidextract
                              
                              regress lwage wks i.id
                              xtreg lwage wks, fe
                              xtdpdgmm lwage wks, iv(wks) model(mdev) norescale small nolevel
                              With robust standard errors, this applies only to the LSDV estimator implemented with regress:
                              Code:
                              regress lwage wks i.id, vce(cluster id)
                              xtdpdgmm lwage wks, iv(wks) model(mdev) norescale small nolevel vce(robust)
                              To replicate the xtreg results (besides the standard error for the constant), the new nolevel option must not be used due to an inconsistency in the way xtreg computes its standard errors:
                              Code:
                              xtreg lwage wks, fe vce(robust)
                              xtdpdgmm lwage wks, iv(wks) model(mdev) norescale small vce(robust)
                              For unbalanced panels, the equivalence breaks down due to slightly different ways in which the variance parameter is estimated.

                              With the nolevel option, we can now also replicate the robust standard errors for a first-difference regression:
                              Code:
                              regress D.(lwage wks), nocons vce(cluster id)
                              xtdpdgmm lwage wks, nocons iv(wks, difference) model(difference) small nolevel vce(robust)
                              Without robust standard errors, the standard errors differ by a factor of sqrt(2) because regress ignores the first-order serial correlation in the first-differenced errors when estimating the variance parameter. For a similar reason, xtdpdgmm needs to be run with option wmatrix(separate) to replicate the results:
                              Code:
                              regress D.(lwage wks), nocons
                              xtdpdgmm lwage wks, nocons iv(wks, difference) model(difference) wmatrix(separate) small nolevel
                              As always, comments are welcome.
                              https://twitter.com/Kripfganz

                              • Hi @Sebastian,
                                Does this command
                                1. accept margins and factor-variable notation,
                                2. have limits on the T dimension, and
                                3. allow estimation of a quadratic equation?

                                Do you think it will be suitable for my case here?
                                https://www.statalist.org/forums/for...tegar-vs-other
