
  • Sebastian Kripfganz
    replied
    I am afraid there was another bug in xtdpdgmm that is now fixed with the latest update to version 2.3.8:
    Code:
    net install xtdpdgmm, from(http://www.kripfganz.de/stata) replace
    This bug could lead to an unexpected error message or incorrect results from postestimation commands after estimating a model with nonlinear moment conditions.

    Thanks to Tiyo Ardiyono for reporting this problem.



  • Sebastian Kripfganz
    replied
    With the usual thanks to Kit Baum, the latest version 2.3.7 of xtdpdgmm with all the bug fixes mentioned here over the last year is now also available on SSC.
    Code:
    adoupdate xtdpdgmm, update



  • Kayode Olaide
    replied
    Thank you so much for your response. I do not know how to start a new post on this platform; I actually posted my query here because I could not figure that out.



  • Sebastian Kripfganz
    replied
    Originally posted by Kayode Olaide View Post
    I'm using the CCE estimation technique for a research work. My dataset consists of eight cross-sectional units (N) and 24 time (T) dimensions in each cross-sectional unit, and I'm using Stata for my estimation. The cross-sectional dependence test shows that the panels are cross-sectionally dependent. Also, the variables have mixed order of integration (stationarity), i.e., I(0) and I(1). Both the CCEMG and CCEPMG don't seem to be quite appropriate for my dataset. Please, I need help finding a suitable estimation technique, and will really appreciate suggestions. Thank you.
    This does not seem to be the right topic for your query. GMM estimators for dynamic panel data models are typically designed for large-N, small-T panels. Your data does not appear to be suitable. Please start a new topic with an informative title, so that others can help as well. In any case, with such a small N, you cannot really account for common correlated effects unless you have some appropriate proxy variables for them (i.e. "global" variables that are constant across units).



  • Kayode Olaide
    replied
    I'm using the CCE estimation technique for a research work. My dataset consists of eight cross-sectional units (N) and 24 time (T) dimensions in each cross-sectional unit, and I'm using Stata for my estimation. The cross-sectional dependence test shows that the panels are cross-sectionally dependent. Also, the variables have mixed order of integration (stationarity), i.e., I(0) and I(1). Both the CCEMG and CCEPMG don't seem to be quite appropriate for my dataset. Please, I need help finding a suitable estimation technique, and will really appreciate suggestions. Thank you.



  • Sebastian Kripfganz
    replied
    The rejections of the AR(3) and the Hansen tests indicate that the model is still potentially misspecified; e.g., further lags could be needed, or some other relevant variables might be omitted. Often, the tests do not provide any specific guidance about the source of the problem. You could use difference-in-Hansen tests to see whether there is a problem with any particular variable. (See, for example, the section on "Model Selection" in my 2019 London Stata Conference presentation.) I would still recommend not using all available lags of yield_mtha and yield_dev. Restricting the lag length might improve the reliability of the specification tests.
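    A minimal sketch of obtaining the difference-in-Hansen tests mentioned here, using the same postestimation command that appears elsewhere in this thread:
    Code:
    estat overid, difference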

    Regarding the magnitude of the bias for the coefficient \(\rho\) of the lagged dependent variable (when there is only one lag), it can be approximated as \(-(1+\rho) / (T-1)\). Thus, e.g. for \(\rho = 0.6\) and \(T = 28\), we would get a bias of approximately -0.06. This may still be too large to be tolerated. There is no specific rule of thumb.
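    As a quick back-of-the-envelope check, the bias approximation above can be evaluated directly in Stata; the values for \(\rho\) and \(T\) are the illustrative ones from the text, not estimates:
    Code:
    * approximate dynamic panel bias -(1+rho)/(T-1); illustrative values only
    scalar rho = 0.6
    scalar T = 28
    display -(1 + rho) / (T - 1)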



  • Jason Xiao
    replied
    Dear Sebastian,

    Thank you for your response. When I add another lagged dependent variable to the model, the AR(2) test is satisfied. However, the AR(3) test and both the Sargan and Hansen tests are not.


    Code:
     xtdpdgmm yield_mtha L.yield_mtha L2.yield_mtha rs_gdd_s2 rs_hdd_s2 rs_precip_s2, ///
    > model(diff) gmm(yield_mtha yield_dev, lag(3 .)) iv(rs_gdd_s2 rs_hdd_s2 rs_precip_s2, model(mdev)) two  coll vce(r)  
    
    Generalized method of moments estimation
    
    Fitting full model:
    Step 1         f(b) =  .61599272
    Step 2         f(b) =   .7763858
    
    Group variable: code_muni                    Number of obs         =     12798
    Time variable: year                          Number of groups      =       474
    
    Moment conditions:     linear =      56      Obs per group:    min =        27
                        nonlinear =       0                        avg =        27
                            total =      56                        max =        27
    
                                (Std. Err. adjusted for 474 clusters in code_muni)
    ------------------------------------------------------------------------------
                 |              WC-Robust
      yield_mtha |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
      yield_mtha |
             L1. |   .3454116    .047806     7.23   0.000     .2517135    .4391097
             L2. |   .2802068   .0349527     8.02   0.000     .2117008    .3487128
                 |
       rs_gdd_s2 |   .0898764   .0793811     1.13   0.258    -.0657077    .2454606
       rs_hdd_s2 |  -2.144849   1.132075    -1.89   0.058    -4.363676     .073978
    rs_precip_s2 |   .1919315   .0210945     9.10   0.000     .1505871     .233276
           _cons |   .0627011    .178586     0.35   0.726    -.2873209    .4127232
    ------------------------------------------------------------------------------
    Instruments corresponding to the linear moment conditions:
     1, model(diff):
       L3.yield_mtha L4.yield_mtha L5.yield_mtha L6.yield_mtha L7.yield_mtha
       L8.yield_mtha L9.yield_mtha L10.yield_mtha L11.yield_mtha L12.yield_mtha
       L13.yield_mtha L14.yield_mtha L15.yield_mtha L16.yield_mtha L17.yield_mtha
       L18.yield_mtha L19.yield_mtha L20.yield_mtha L21.yield_mtha L22.yield_mtha
       L23.yield_mtha L24.yield_mtha L25.yield_mtha L26.yield_mtha L27.yield_mtha
       L28.yield_mtha L3.yield_dev L4.yield_dev L5.yield_dev L6.yield_dev
       L7.yield_dev L8.yield_dev L9.yield_dev L10.yield_dev L11.yield_dev
       L12.yield_dev L13.yield_dev L14.yield_dev L15.yield_dev L16.yield_dev
       L17.yield_dev L18.yield_dev L19.yield_dev L20.yield_dev L21.yield_dev
       L22.yield_dev L23.yield_dev L24.yield_dev L25.yield_dev L26.yield_dev
       L27.yield_dev L28.yield_dev
     2, model(mdev):
       rs_gdd_s2 rs_hdd_s2 rs_precip_s2
     3, model(level):
       _cons
    
    . 
    . estat serial, ar(1/3) 
    
    Arellano-Bond test for autocorrelation of the first-differenced residuals
    H0: no autocorrelation of order 1:     z =   -8.8835   Prob > |z|  =    0.0000
    H0: no autocorrelation of order 2:     z =    1.4192   Prob > |z|  =    0.1559
    H0: no autocorrelation of order 3:     z =   -3.3863   Prob > |z|  =    0.0007
    
    . estat overid
    
    Sargan-Hansen test of the overidentifying restrictions
    H0: overidentifying restrictions are valid
    
    2-step moment functions, 2-step weighting matrix       chi2(50)    =  368.0069
                                                           Prob > chi2 =    0.0000
    
    2-step moment functions, 3-step weighting matrix       chi2(50)    =  375.7389
                                                           Prob > chi2 =    0.0000
    Since you mentioned that I have a relatively large T and the dynamic panel bias could be small, what is a rule of thumb for determining whether T is large enough to ignore such bias? Moreover, would you worry about the inconsistency of the estimator if we estimate this dynamic panel with FE? I am attaching the FE estimates here for your reference.


    Code:
    . areg yield_mtha L.yield_mtha L2.yield_mtha rs_gdd_s2 rs_hdd_s2 rs_precip_s2, absorb (code_muni) vce(cluster code_muni)
    
    Linear regression, absorbing indicators         Number of obs     =     12,798
                                                    F(   5,    473)   =     353.32
                                                    Prob > F          =     0.0000
                                                    R-squared         =     0.4831
                                                    Adj R-squared     =     0.4631
                                                    Root MSE          =     0.4588
    
                                (Std. Err. adjusted for 474 clusters in code_muni)
    ------------------------------------------------------------------------------
                 |               Robust
      yield_mtha |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
      yield_mtha |
             L1. |   .3292273   .0200585    16.41   0.000     .2898126    .3686421
             L2. |   .2575985   .0157955    16.31   0.000     .2265606    .2886365
                 |
       rs_gdd_s2 |   .0939887   .0570919     1.65   0.100    -.0181964    .2061737
       rs_hdd_s2 |  -2.253687   .7521274    -3.00   0.003    -3.731612   -.7757632
    rs_precip_s2 |   .1822983   .0145616    12.52   0.000     .1536849    .2109116
           _cons |    .128067   .1301362     0.98   0.326    -.1276496    .3837836
    -------------+----------------------------------------------------------------
       code_muni |   absorbed                                     (474 categories)

    Thank you for suggesting the MLE and bias-corrected estimator approach. I will look into these papers as well.


    Originally posted by Sebastian Kripfganz View Post
    First of all, your number of instruments (2728) is way too large relative to the number of groups. While you are using the asymptotically optimal set of instruments, in finite samples we need to ensure that the number of instruments is reasonably small to avoid biases and unreliable test results. Common strategies to reduce the number of instruments include curtailing (i.e. setting a maximum lag order for the instruments) and collapsing (i.e. turning GMM-style moment conditions that are separate for every time period into standard moment conditions that are summations over all time periods). Given that you have a relatively large number of time periods, curtailing definitely makes sense because far lags are unlikely to be strong instruments. Unless you have a very large number of groups (in the thousands) or a very small number of time periods, collapsing usually doesn't do any harm either.

    You implicitly assumed that all your independent variables (besides the lagged dependent variable) are strictly exogenous, i.e. uncorrelated with all future and past idiosyncratic errors. While leads are valid instruments in that case, this is hardly done in practice. You could further reduce the number of instruments by starting with lag 0 (the first argument of the lag() option). Moreover, it would be sufficient to simply instrument the strictly exogenous variables by themselves for the model with a mean-deviations transformation (the same as for the conventional fixed-effects estimator), i.e. iv(rs_gdd_s2 rs_hdd_s2 rs_precip_s2, model(mdev)).

    Given your relatively large number of time periods and the strict exogeneity assumption for the independent variables, you may not even need a GMM estimator at all, as the dynamic panel data bias of the conventional fixed-effects estimator might be sufficiently small. If you still worry about the bias, a maximum likelihood estimator or a bias-corrected estimator might be more efficient alternatives to the GMM estimator (and possibly with better finite-sample properties as well). See for instance (with links to Stata packages):
    Regarding the Arellano-Bond test: Assuming those test results remain qualitatively the same once you have appropriately dealt with the too-many-instruments problem, this would indicate that the model is not dynamically complete. The remaining serial correlation in the error term would render the instruments for the lagged dependent variable invalid. A remedy would be to add further lags of the dependent variable (and possibly the independent variables) as regressors to proxy for this serial correlation.



  • Sebastian Kripfganz
    replied
    First of all, your number of instruments (2728) is way too large relative to the number of groups. While you are using the asymptotically optimal set of instruments, in finite samples we need to ensure that the number of instruments is reasonably small to avoid biases and unreliable test results. Common strategies to reduce the number of instruments include curtailing (i.e. setting a maximum lag order for the instruments) and collapsing (i.e. turning GMM-style moment conditions that are separate for every time period into standard moment conditions that are summations over all time periods). Given that you have a relatively large number of time periods, curtailing definitely makes sense because far lags are unlikely to be strong instruments. Unless you have a very large number of groups (in the thousands) or a very small number of time periods, collapsing usually doesn't do any harm either.

    You implicitly assumed that all your independent variables (besides the lagged dependent variable) are strictly exogenous, i.e. uncorrelated with all future and past idiosyncratic errors. While leads are valid instruments in that case, this is hardly done in practice. You could further reduce the number of instruments by starting with lag 0 (the first argument of the lag() option). Moreover, it would be sufficient to simply instrument the strictly exogenous variables by themselves for the model with a mean-deviations transformation (the same as for the conventional fixed-effects estimator), i.e. iv(rs_gdd_s2 rs_hdd_s2 rs_precip_s2, model(mdev)).
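    As a rough sketch of how curtailing and collapsing could be combined with the mean-deviations instruments in the specification discussed here (the lag range 2 to 5 is purely an illustrative choice, not a recommendation):
    Code:
    * curtailed (maximum lag 5) and collapsed instruments; lag range illustrative
    xtdpdgmm yield_mtha L.yield_mtha rs_gdd_s2 rs_hdd_s2 rs_precip_s2, ///
        model(diff) gmm(yield_mtha, lag(2 5) collapse) ///
        iv(rs_gdd_s2 rs_hdd_s2 rs_precip_s2, model(mdev)) twostep vce(r)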

    Given your relatively large number of time periods and the strict exogeneity assumption for the independent variables, you may not even need a GMM estimator at all, as the dynamic panel data bias of the conventional fixed-effects estimator might be sufficiently small. If you still worry about the bias, a maximum likelihood estimator or a bias-corrected estimator might be more efficient alternatives to the GMM estimator (and possibly with better finite-sample properties as well). See for instance (with links to Stata packages):
    Regarding the Arellano-Bond test: Assuming those test results remain qualitatively the same once you have appropriately dealt with the too-many-instruments problem, this would indicate that the model is not dynamically complete. The remaining serial correlation in the error term would render the instruments for the lagged dependent variable invalid. A remedy would be to add further lags of the dependent variable (and possibly the independent variables) as regressors to proxy for this serial correlation.



  • Jason Xiao
    replied
    Hi, I have a question regarding failing to satisfy the higher-order serial correlation test after difference GMM. I am not sure if it is caused by my xtdpdgmm command or something else. It is my first time doing dynamic panel estimation, so please let me know if there is anything that I am missing.

    Code:
    xtdpdgmm yield_mtha L.yield_mtha rs_gdd_s2 rs_hdd_s2 rs_precip_s2, ///
    model(diff) gmm(yield_mtha, lag(2 .)) gmm(rs_gdd_s2 rs_hdd_s2 rs_precip_s2, lag(. .))  
    
    
    note: standard errors may not be valid
    
    Generalized method of moments estimation
    
    Fitting full model:
    Step 1         f(b) =  3.3141562
    
    Group variable: code_muni                    Number of obs         =     13272
    Time variable: year                          Number of groups      =       474
    
    Moment conditions:     linear =    2728      Obs per group:    min =        28
                        nonlinear =       0                        avg =        28
                            total =    2728                        max =        28
    
    ------------------------------------------------------------------------------
      yield_mtha |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
      yield_mtha |
             L1. |   .4519066   .0089698    50.38   0.000     .4343261     .469487
                 |
       rs_gdd_s2 |   .1722979   .0518055     3.33   0.001      .070761    .2738348
       rs_hdd_s2 |  -1.925162    .496409    -3.88   0.000    -2.898106   -.9522184
    rs_precip_s2 |    .158965   .0150132    10.59   0.000     .1295397    .1883903
           _cons |   .1510277   .1209611     1.25   0.212    -.0860518    .3881071
    ------------------------------------------------------------------------------
    Instruments corresponding to the linear moment conditions:
     1, model(diff):
       1992:L2.yield_mtha 1993:L2.yield_mtha 1994:L2.yield_mtha 1995:L2.yield_mtha
       1996:L2.yield_mtha 1997:L2.yield_mtha 1998:L2.yield_mtha 1999:L2.yield_mtha
       2000:L2.yield_mtha 2001:L2.yield_mtha 2002:L2.yield_mtha 2003:L2.yield_mtha
       2004:L2.yield_mtha 2005:L2.yield_mtha 2006:L2.yield_mtha 2007:L2.yield_mtha
       2008:L2.yield_mtha 2009:L2.yield_mtha 2010:L2.yield_mtha 2011:L2.yield_mtha
    .......
    .......
    .......
    I am estimating a yield response model for a perennial crop. On the LHS, yield_mtha is the yield for year t. On the RHS, rs_gdd_s2 rs_hdd_s2 rs_precip_s2 are weather realizations during the period of interest for year t. Since it is a perennial crop, I am also interested in accounting for the "alternate bearing" effect, which means a big crop year is usually followed by a small crop year. (In the full model I use, I also include a one-year lagged yield deviation. The purpose of this simplified model is to understand the command.)

    Below is the Arellano-Bond test for absence of serial correlation in the first-differenced errors. I reject all of them, which is really confusing.

    Code:
    estat serial, ar(1/3) 
    
    Arellano-Bond test for autocorrelation of the first-differenced residuals
    H0: no autocorrelation of order 1:     z =  -50.3692   Prob > |z|  =    0.0000
    H0: no autocorrelation of order 2:     z =   23.5852   Prob > |z|  =    0.0000
    H0: no autocorrelation of order 3:     z =  -13.4781   Prob > |z|  =    0.0000
    My questions are: (1) What does it indicate when I fail to satisfy the AB test for higher-order autocorrelation? (2) What should I do when I encounter this situation?

    Thank you so much!



  • Sebastian Kripfganz
    replied
    Originally posted by Sebastian Kripfganz View Post
    There is a new update to version 2.3.4 on my website:
    Code:
    net install xtdpdgmm, from(http://www.kripfganz.de/stata/) replace
    This version fixes a bug that produced an incorrect list of instruments in the output footer and incorrectly labelled the instruments generated by the postestimation command predict, iv. This bug only bit if a static model was estimated with GMM-type instruments. If the model included a lag of the dependent or independent variables, then the problem did not occur. This bug did not affect any of the computations. It was just a matter of displaying the correct list of instruments.
    This is a bit embarrassing. It turns out that I did not entirely fix the bug with the instrument labels. In fact, for some specifications I even made it worse. A new update to version 2.3.5 is now available that hopefully this time really fixes this issue. As before, the bug only affected labels and the displaying of the instrument list. The estimation results themselves were not affected.



  • Eliana Melo
    replied
    Thank you so much, Professor Kripfganz, now it is clear!



  • haiyan lin
    replied
    Originally posted by Sebastian Kripfganz View Post
    Bootstrapping GMM estimates for dynamic panel models is not a straightforward task. After resampling the residuals, you would need to recursively reconstruct the data for the dependent variable using the estimate for the coefficient of the lagged dependent variable. The instruments used in the estimation also need to be updated accordingly. As far as I know, this cannot be readily done with the existing bootstrap functionality in Stata.
    Thanks, Sebastian! I searched for this issue but could not find a solution. I feel relieved after receiving your answer :D



  • Sebastian Kripfganz
    replied
    Eliana Melo
    In your specification, ob_agre subnormal inadpf log_pib log_iasc are treated as strictly exogenous, not predetermined. Also, you implicitly assume that they are uncorrelated with the unobserved "fixed effects", because they are used as instruments without the first-difference transformation in the level model. You might want to change your code as follows:
    Code:
    xtdpdgmm pntbt L.pntbt ob_agre subnormal inadpf log_decapu log_pib log_iasc log_tarid, ///
    gmmiv(L.pntbt, lag(2 2) m(d) collapse) ///
    gmmiv(L.pntbt, lag(1 1) m(l) diff collapse) ///
    gmmiv(log_decapu, lag(2 2) m(d) collapse) ///
    gmmiv(log_decapu, lag(1 1) m(l) diff collapse) ///
    gmmiv(log_tarid, lag(2 2) m(d) collapse) ///
    gmmiv(log_tarid, lag(1 1) m(l) diff collapse) ///
    gmmiv(ob_agre subnormal inadpf log_pib log_iasc, lag(1 1) m(d) collapse) ///
    gmmiv(ob_agre subnormal inadpf log_pib log_iasc, lag(1 1) m(l) diff collapse) ///
    twostep vce(r) overid
    For the level model, it does not make a difference whether a variable is treated as endogenous or predetermined.

    One possibility to deal with the differences in the overidentification tests would be to consider an iterated GMM estimator (option igmm instead of twostep), although this could aggravate any problems if there is a weak identification problem. I would suggest to check for weak identification with the underid command (available from SSC and explained in my presentation).
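    A sketch of both suggestions, re-using the corrected specification from above with igmm substituted for twostep, followed by underid as a postestimation command (this assumes underid is installed from SSC; see its help file for options):
    Code:
    * same specification as above, with igmm in place of twostep
    xtdpdgmm pntbt L.pntbt ob_agre subnormal inadpf log_decapu log_pib log_iasc log_tarid, ///
    gmmiv(L.pntbt, lag(2 2) m(d) collapse) ///
    gmmiv(L.pntbt, lag(1 1) m(l) diff collapse) ///
    gmmiv(log_decapu, lag(2 2) m(d) collapse) ///
    gmmiv(log_decapu, lag(1 1) m(l) diff collapse) ///
    gmmiv(log_tarid, lag(2 2) m(d) collapse) ///
    gmmiv(log_tarid, lag(1 1) m(l) diff collapse) ///
    gmmiv(ob_agre subnormal inadpf log_pib log_iasc, lag(1 1) m(d) collapse) ///
    gmmiv(ob_agre subnormal inadpf log_pib log_iasc, lag(1 1) m(l) diff collapse) ///
    igmm vce(r) overid
    underid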

    For correct specification, you want the Difference-in-Hansen tests to not reject the null hypothesis. But for this test to be valid, it is initially required that the test in the "Excluding" column also does not reject the null hypothesis. In your case, none of the tests gives rise to an obvious concern, but the p-values are also not large enough to be entirely comfortable.



  • Sebastian Kripfganz
    replied
    Originally posted by haiyan lin View Post
    Is there a good way to get bootstrapped confidence intervals after GMM estimation?
    Bootstrapping GMM estimates for dynamic panel models is not a straightforward task. After resampling the residuals, you would need to recursively reconstruct the data for the dependent variable using the estimate for the coefficient of the lagged dependent variable. The instruments used in the estimation also need to be updated accordingly. As far as I know, this cannot be readily done with the existing bootstrap functionality in Stata.



  • Eliana Melo
    replied
    Dear all,


    I have doubts about how to embed predetermined variables in the system GMM. I read Professor Sebastian's presentation and I am not sure if I am doing it right. My dependent variable is the percentage of non-technical losses in the distribution of electricity (pntbt), or electricity theft. I suspect endogeneity of two explanatory variables: the duration of interruptions in electricity distribution (log_decapu) and the electricity price (log_tarid). The other variables are predetermined (I have no evidence to think they are strictly exogenous).

    Code:
    xtdpdgmm pntbt L.pntbt ob_agre subnormal inadpf log_decapu log_pib log_iasc log_tarid, ///
    gmmiv(L.pntbt, lag(2 2) m(d) collapse) ///
    gmmiv(L.pntbt, lag(2 2) m(l) diff collapse) ///
    gmmiv(log_decapu, lag(2 2) m(d) collapse) ///
    gmmiv(log_decapu, lag(3 3) m(l) diff collapse) ///
    gmmiv(log_tarid, lag(2 2) m(d) collapse) ///
    gmmiv(log_tarid, lag(2 2) m(l) diff collapse) ///
    gmmiv(ob_agre subnormal inadpf log_pib log_iasc, lag(0 1) m(d) collapse) ///
    gmmiv(ob_agre subnormal inadpf log_pib log_iasc, lag(0 1) m(l) collapse) ///
    twostep vce(r) overid

    Code:
    Group variable: id                           Number of obs         =       721
    Time variable: ano                           Number of groups      =        61
    
    Moment conditions:     linear =      27      Obs per group:    min =         8
                        nonlinear =       0                        avg =  11.81967
                            total =      27                        max =        12
    
                                        (Std. Err. adjusted for 61 clusters in id)
    ------------------------------------------------------------------------------
                 |              WC-Robust
           pntbt |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
           pntbt |
             L1. |   .8545485   .0980974     8.71   0.000     .6622812    1.046816
                 |
         ob_agre |   .0000241   .0002129     0.11   0.910    -.0003931    .0004414
       subnormal |   .2545236   .1650646     1.54   0.123    -.0689971    .5780442
          inadpf |   .0712857   .2658358     0.27   0.789    -.4497428    .5923142
      log_decapu |   .0202332   .0179756     1.13   0.260    -.0149983    .0554647
         log_pib |   .0086632    .008496     1.02   0.308    -.0079886     .025315
        log_iasc |  -.0108519   .0179764    -0.60   0.546    -.0460851    .0243813
       log_tarid |   .0352162   .0273752     1.29   0.198    -.0184383    .0888706
           _cons |  -.2797855   .2705651    -1.03   0.301    -.8100833    .2505124
    ------------------------------------------------------------------------------
    Code:
    estat serial
    estat overid
    estat overid, difference
    Code:
    estat serial
    
    Arellano-Bond test for autocorrelation of the first-differenced residuals
    H0: no autocorrelation of order 1:     z =   -3.2453   Prob > |z|  =    0.0012
    H0: no autocorrelation of order 2:     z =    1.5184   Prob > |z|  =    0.1289
    
    . estat overid
    
    Sargan-Hansen test of the overidentifying restrictions
    H0: overidentifying restrictions are valid
    
    2-step moment functions, 2-step weighting matrix       chi2(18)    =   24.4723
                                                           Prob > chi2 =    0.1402
    
    2-step moment functions, 3-step weighting matrix       chi2(18)    =   30.4614
                                                           Prob > chi2 =    0.0332
    
    . estat overid, difference
    
    Sargan-Hansen (difference) test of the overidentifying restrictions
    H0: (additional) overidentifying restrictions are valid
    
    2-step weighting matrix from full model
    
                      | Excluding                   | Difference                  
    Moment conditions |       chi2     df         p |        chi2     df         p
    ------------------+-----------------------------+-----------------------------
       1, model(diff) |    24.4438     17    0.1079 |      0.0286      1    0.8658
      2, model(level) |    24.4721     17    0.1072 |      0.0002      1    0.9884
       3, model(diff) |    22.0325     17    0.1835 |      2.4398      1    0.1183
      4, model(level) |    24.4103     17    0.1087 |      0.0620      1    0.8033
       5, model(diff) |    22.2540     17    0.1751 |      2.2183      1    0.1364
      6, model(level) |    24.4709     17    0.1072 |      0.0014      1    0.9700
       7, model(diff) |    15.6926      8    0.0470 |      8.7797     10    0.5531
      8, model(level) |     8.6499      8    0.3727 |     15.8224     10    0.1048
          model(diff) |     8.0584      5    0.1530 |     16.4139     13    0.2275
         model(level) |     8.0584      5    0.1530 |     16.4139     13    0.2275

    In an earlier post, Prof. Kripfganz mentioned that
    It is usually sufficient to consider the overidentification test with the 2-step weighting matrix. The two tests are asymptotically equivalent. If they differ substantially, then this would be an indication that the weighting matrix is poorly estimated.
    In this case, the two overidentification tests differ; what should I do in that case? And when are the results from the 2-step and 3-step weighting matrices considered substantially different?

    Also, I am not sure whether I am interpreting the Sargan-Hansen difference test correctly. In general, the (Difference-in-)Hansen tests do not reject the null hypothesis, so the instruments in all equations would seem to be valid. Or should I be concerned that in some equations the p-values are relatively small?


    Thank you so much for any comment!!!
    Last edited by Eliana Melo; 04 Jul 2021, 12:02.

