  • Zainab Mariam
    replied
    Dear Professor Sebastian,

    Many thanks for your reply.

    1) To implement the Difference GMM estimator using your command ‘xtdpdgmm’, do I have to specify the option model(diff) only once (specifically, right after the model’s variables)?

    . xtdpdgmm L(0/1).y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) collapse gmm(y, lag(2 4)) gmm(L.x1, lag(2 4)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(1 3)) gmm(x10, lag(0 0)) ///
    > nocons two vce(r)

    Or do I have to specify the option model(diff) within each gmm() option?

    . xtdpdgmm L(0/1).y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, collapse gmm(y, lag(2 4) model(diff)) gmm(L.x1, lag(2 4) model(diff)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(1 3) model(diff)) gmm(x10, lag(0 0) model(diff)) ///
    > nocons two vce(r)

    2) To implement the Difference GMM estimator using your command ‘xtdpdgmm’, do I have to specify the option collapse only once (specifically, before the first gmm() option)?

    . xtdpdgmm L(0/1).y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) collapse gmm(y, lag(2 4)) gmm(L.x1, lag(2 4)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(1 3)) gmm(x10, lag(0 0)) ///
    > nocons two vce(r)

    Or do I have to specify the option collapse within each gmm() option?

    . xtdpdgmm L(0/1).y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) gmm(y, lag(2 4) collapse) gmm(L.x1, lag(2 4) collapse) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(1 3) collapse) gmm(x10, lag(0 0) collapse) ///
    > nocons two vce(r)


    3) To implement the Difference GMM estimator using your command ‘xtdpdgmm’, can I include dummies (such as industry, country and year dummies) in my regression model? If so, what do I have to include in the code?

    Your cooperation is highly appreciated.



  • Sebastian Kripfganz
    replied
    All three codes appear to be equivalent.

    gmm() is just an abbreviation of gmmiv(). iv() is a collapsed version of gmmiv(). The help file states:
    gmmiv(varlist, lagrange(#_1 #_2) collapse) is equivalent to iv(varlist, lagrange(#_1 #_2)).
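    For example, these two specifications of the instruments should yield the same (collapsed) instrument set; the variable x and the lag range here are placeholders for illustration:
    Code:
    xtdpdgmm L(0/1).y x, model(diff) gmmiv(x, lagrange(1 3) collapse) nocons twostep
    xtdpdgmm L(0/1).y x, model(diff) iv(x, lagrange(1 3)) nocons twostep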



  • Zainab Mariam
    replied
    Dear Professor Sebastian,

    Thank you for your response.

    1) To implement the Difference GMM estimator using your command ‘xtdpdgmm’, I wonder whether the following codes are different or equivalent.

    . xtdpdgmm L(0/1).y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) collapse gmm(L.y, lag(1 3)) gmm(L.x1, lag(2 4)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(1 3)) gmm(x10, lag(0 0)) ///
    > nocons two vce(r)

    . xtdpdgmm y L.y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) collapse gmm(y, lag(2 4)) gmm(L.x1, lag(2 4)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(1 3)) gmm(x10, lag(0 0)) ///
    > nocons two vce(r)

    . xtdpdgmm y L.(y x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) collapse gmm(y, lag(2 4)) gmm(L.x1, lag(2 4)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(1 3)) gmm(x10, lag(0 0)) ///
    > nocons two vce(r)

    Where:
    y is the dependent variable;
    L.y is the lagged dependent variable as a regressor;
    L.x1 is the independent variable;
    L.x2, L.x3, L.x4, L.x5, L.x6, L.x7, L.x8, L.x9, x10 are the control variables.

    2) Is there any difference between the options: gmmiv( ), gmm( ), and iv( )?

    I do appreciate your cooperation.
    Last edited by Zainab Mariam; 15 Aug 2022, 08:06.



  • Sebastian Kripfganz
    replied
    Both codes are equivalent. You may choose whichever you find more intuitive.



  • Zainab Mariam
    replied
    Dear Professor Sebastian,

    Thank you for your reply.

    In the first gmm() option, do I have to put the dependent variable y itself?

    . xtdpdgmm L(0/1).y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) collapse gmm(y, lag(2 4)) gmm(L.x1, lag(2 4)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(1 3)) gmm(x10, lag(0 0)) ///
    > nocons two vce(r)

    or do I have to put the lagged dependent variable L.y (the regressor)?

    . xtdpdgmm L(0/1).y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) collapse gmm(L.y, lag(1 3)) gmm(L.x1, lag(2 4)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(1 3)) gmm(x10, lag(0 0)) ///
    > nocons two vce(r)

    Your cooperation is highly appreciated.



  • Sebastian Kripfganz
    replied
    One would normally consider the lagged dependent variable as predetermined, not endogenous. In this case, all codes would be "correct", although the first two codes would be preferable because they already use the second lag of y (i.e. the first lag of L.y) as an instrument, which is stronger than only the second lag of L.y (i.e. the third lag of y). As far as I can see, the first two codes are equivalent.

    With the latest version of the xtdpdgmm package, you can use the xtdpdgmmfe command to specify predetermined and endogenous variables. It will also show you the appropriate xtdpdgmm code; see my post #450 above.
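    For illustration only (the classification of the regressors below is hypothetical, and the options are those documented in the xtdpdgmmfe help file), a wrapper call along these lines would print the implied xtdpdgmm command:
    Code:
    xtdpdgmmfe y x1 x2 x10, lags(1) endogenous(x1) predetermined(x2) exogenous(x10) collapse curtail(4) twostep vce(robust)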

    By the "contemporaneous value of the lagged control variable" L.x5, I would understand L.x5 itself; x5 is the contemporaneous value of the control variable x5.



  • Zainab Mariam
    replied
    Dear Professor Sebastian,

    I am using Stata 14. I am working with an unbalanced panel of 5,084 firms over a period of 22 years. My model includes 10 explanatory variables (L.x1, L.x2, L.x3, L.x4, L.x5, L.x6, L.x7, L.x8, L.x9, x10). The dependent variable y is a limited dependent variable, truncated between zero and one.

    I have a dynamic model (my regression model includes the lagged dependent variable L.y as a regressor).

    1) I will apply the Difference GMM estimator using your command ‘xtdpdgmm’. I will consider the lagged dependent variable L.y as endogenous, the independent variable L.x1 as endogenous, the variables L.x2, L.x3, L.x4, L.x5, L.x6, L.x7, L.x8, L.x9 as predetermined, and the variable x10 (firm age) as exogenous. Thus, my first question is: which of the following codes is/are correct for implementing the Difference GMM estimator?


    . xtdpdgmm L(0/1).y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) collapse gmm(y, lag(2 4)) gmm(L.x1, lag(2 4)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(1 3)) gmm(x10, lag(0 0)) ///
    > nocons two vce(r)

    . xtdpdgmm L(0/1).y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) collapse gmm(y L.x1, lag(2 4)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(1 3)) gmm(x10, lag(0 0)) ///
    > nocons two vce(r)

    . xtdpdgmm L(0/1).y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) collapse gmm(L.y, lag(2 4)) gmm(L.x1, lag(2 4)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(1 3)) gmm(x10, lag(0 0)) ///
    > nocons two vce(r)

    . xtdpdgmm L(0/1).y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) collapse gmm(L.y L.x1, lag(2 4)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(1 3)) gmm(x10, lag(0 0)) ///
    > nocons two vce(r)

    . xtdpdgmm L(0/1).y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) collapse gmm(L.y, lag(2 4)) gmm(L.x1, lag(2 4)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(1 3)) gmm(x10, lag(. .)) ///
    > nocons two vce(r)

    . xtdpdgmm L(0/1).y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) collapse gmm(L.y, lag(2 4)) gmm(L.x1, lag(2 4)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(1 3)) gmm(x10, lag(0 2)) ///
    > nocons two vce(r)

    . xtdpdgmm L(0/1).y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) collapse gmm(L.y, lag(2 4)) gmm(L.x1, lag(2 4)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(0 2)) gmm(x10, lag(0 0)) ///
    > nocons two vce(r)

    . xtdpdgmm L(0/1).y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) collapse gmm(L.y L.x1, lag(2 4)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(0 2)) gmm(x10, lag(0 0)) ///
    > nocons two vce(r)

    . xtdpdgmm L(0/1).y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) collapse gmm(L.y L.x1, lag(2 4)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(0 2)) gmm(x10, lag(. .)) ///
    > nocons two vce(r)

    . xtdpdgmm L(0/1).y L.(x1 x2 x3 x4 x5 x6 x7 x8 x9) x10, model(diff) collapse gmm(L.y, lag(2 .)) gmm(L.x1, lag(2 .)) gmm(L.x2 L.x3 L.x4 L.x5 L.x6 L.x7 L.x8 L.x9, lag(0 .)) gmm(x10, lag(. .)) ///
    > nocons two vce(r)


    2) If none of the previous codes is correct, what is the correct code I have to use in order to implement the Difference GMM estimator?

    3) What is the contemporaneous term of the lagged control variable? For instance, is x5 (i.e., the variable x5 at time t) the contemporaneous value of the lagged control variable L.x5, or is it L.x5 (i.e., the variable x5 at time t minus 1)?

    Sorry for the long message.
    Thank you in advance.



  • Sebastian Kripfganz
    replied
    Thanks to Kit Baum, the latest version 2.6.2 of xtdpdgmm with all the updates from the recent weeks is now also available on SSC.



  • Sebastian Kripfganz
    replied
    Nicu Sprincean
    This happens if the previous version of the command is still in memory when you perform the update. There are two solutions:
    (a) restart Stata, or (b) run the command clear mata.
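    A minimal sketch of route (b), combined with the install line given elsewhere in this thread:
    Code:
    net install xtdpdgmm, from(http://www.kripfganz.de/stata) replace
    clear mata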

    Joseph L. Staats
    1. If you were using all available lags for the FOD model, then additional lags (beyond the first lag) of instruments in levels are redundant. (Essentially, you would be introducing perfect collinearity among the instruments.) Technically, this is no longer the case when you restrict the maximum lag order for the FOD model. However, in empirical practice there is hardly ever more than 1 lag used for the model in levels.
    2. It is quite common that you observe opposite signs for the coefficients of the contemporaneous and the lagged variables. This often indicates that a large initial effect is dampened (or sometimes entirely counteracted) in the following period. It is perfectly fine to have effects going in both directions. This is the very nature of simultaneity as a form of endogeneity. While you have constructed models with effects that go in opposite directions, you achieved identification using lags of the respective regressors as instruments.



  • Nicu Sprincean
    replied
    Sebastian Kripfganz,
    There seems to be an error when trying to run the Jochmans portmanteau test. Here is the error:
    Code:
      xtdpdgmm_serial_pm():  3021  class compiled at different times
                     <istmt>:     -  function returned error
    r(3021);



  • Joseph L. Staats
    replied
    As always, thanks for your help. I have two additional questions that relate to the answers you provided for my two previous questions.

    1. I would like clarification of this part of your answer: "For the level model in a system GMM approach, it is quite uncommon to use any higher-order lags." Do you mean that, when structuring the instrument lags, the level lags should remain fixed at a low maximum (e.g., 1) no matter what the maximum instrument lag might be for the FOD model (e.g., 2)? Or do you mean instead that the maximum instrument lag for the level model should normally be the same as in the FOD model?

    2. For my reverse-direction model, I have been using one lag of the independent variable of interest as a regressor. I obtain very substantial and statistically significant coefficients for each of the contemporaneous and lagged variables, but the coefficient for the former is positive and for the latter negative. I don't obtain anything satisfactory when I use a second lag. I would have confidence in my model (it passes overidentification and underidentification tests) were it not for the fact that I also have constructed a model where the effects go in the opposite direction. Other than convincing a reader or reviewer that the reverse direction is supported by theory, is there anything else I need to worry about in constructing models that work in opposite directions?



  • Sebastian Kripfganz
    replied
    New week, new update, new feature: Version 2.6.1 of xtdpdgmm is shipped with the new postestimation command estat serialpm, which computes the Jochmans (2020) portmanteau test for absence of serial correlation in the idiosyncratic error component. Unlike the Arellano and Bond (1991) test - implemented in estat serial - the portmanteau test does not test for autocorrelation of the first-differenced residuals at a specific order (e.g. second order), but jointly tests for the absence of autocorrelation of the idiosyncratic level errors at any order. (Technical comment: Because the portmanteau test involves level residuals, it is not invariant to the exclusion of an intercept.) The Jochmans portmanteau test can be a more powerful alternative to the Arellano-Bond test, especially if T is relatively small or if there is very strong serial correlation (close to a unit root).

    In this new version, the option ar() of estat serial has been renamed order(). The former name was borrowed from xtdpd and xtabond2, but it is confusing because it is not actually a test for autoregressive residuals. For backward compatibility, the ar() option continues to work, but it is no longer documented.
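    After estimating a model with xtdpdgmm, the two tests can then be run along these lines (a sketch; the lag orders in order() are illustrative):
    Code:
    estat serialpm
    estat serial, order(1/2)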

    To install the latest version of the command, type the following in Stata's command window:
    Code:
    net install xtdpdgmm, from(http://www.kripfganz.de/stata) replace
    References:
    • Arellano, M., and S. R. Bond (1991). Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations. Review of Economic Studies 58, 277-297.
    • Jochmans, K. (2020). Testing for correlation in error-component models. Journal of Applied Econometrics 35, 860-878.



  • Sebastian Kripfganz
    replied
    1. From a theoretical perspective, as long as your instruments are valid, your estimator will be consistent. However, none of the statistics you mentioned (overidentification, underidentification, AIC/BIC) are qualified procedures to select a subset of instruments, assuming all instruments are valid, for the purpose of improving the fit. These tests have no asymptotic power for discriminating among the models you are comparing with each other. Nevertheless, limiting the maximum lag order can be beneficial to reduce weak-instruments problems. Appropriate instrument selection procedures for this purpose are unfortunately not implemented in Stata. From an applied perspective, a reviewer or reader might wonder about the rationale behind your choices. Picking models in the way you have done could be interpreted as cherry picking or data mining unless you have a good justification, say, that higher-order lags of x1 are relatively weaker than higher-order lags of x2. For the level model in a system GMM approach, it is quite uncommon to use any higher-order lags. It is not necessarily wrong, but it can be hard to justify, especially when you are not consistent with your choice across variables and/or specifications.

    2. Instead of starting with higher-order lags (which might be relatively weak instruments), my suggestion would be to include lags of that variable as additional regressors. This way, you can check whether there are contemporaneous and/or lagged effects.



  • Joseph L. Staats
    replied
    Sebastian,

    I am using xtdpdgmm for system GMM models having ten independent and control variables. I have the following questions:

    1. I find that I can sometimes improve model fit in terms of overidentification, underidentification, and AIC and BIC if I use: (a) different instrument lag ranges between variables (e.g., x1 will use lag(1 1) and x2 will use lag(1 3)); and (b) different lag ranges between the FOD equation and the level equation for a single variable (e.g., in the FOD equation, x4 will be lag(1 2) and in the level equation lag(1 3)). Is there anything improper in doing this? I hadn't thought there was until reading your response to a question in #395 about "cherry picking" (which I realize related to comparing system GMM results with difference GMM results, which is different from my question).

    2. In my project, I am proposing that my dependent variable not only is acted upon by my main independent variable of interest but also acts in the opposite direction on the independent variable. There is strong theoretical support for the former; the theory for the latter is novel, but one I believe I can support. In using system GMM for the latter, is there anything special I can do to make a stronger argument that the reverse-direction effects are not spurious? One thing I have tried is starting the instrument lags for the formerly dependent variable, now independent variable, at lag 3 rather than lag 1 (this produces good results, as does starting at lag 4).

    Thanks.



  • Sebastian Kripfganz
    replied
    This week's update of the xtdpdgmm package to version 2.6.0 brings a new command, xtdpdgmmfe, which serves as a wrapper for xtdpdgmm with simplified syntax. Instead of specifying all the instruments manually, this wrapper command does it for you based on a set of assumptions you input.
    • With option lags(), you can specify the autoregressive order of the model. By default, a dynamic model with 1 lag of the dependent variable is estimated.
    • With options exogenous(), predetermined(), and endogenous(), you need to classify the regressors accordingly.
    • Dummies for time effects can be added in the familiar way with option teffects.
    • With the familiar option collapse and the new option curtail(), you can easily reduce the number of instruments using collapsing or curtailing. The latter sets a maximum lag order for the instruments.
    • Option orthogonal allows you to request orthogonal deviations instead of first differences. (Note: For strictly exogenous variables, this will typically add instruments for the model in deviations from within-group means and for the model in forward-orthogonal deviations, while for predetermined and endogenous variables instruments are only available for the model in forward-orthogonal deviations. Also importantly, orthogonal automatically reduces the maximum lag order specified with option curtail() by 1 lag to ensure that the number of instruments stays the same with and without orthogonal deviations. The reason is that with orthogonal deviations, the minimum lag that is valid as an instrument is lower by 1 as well.)
    • With option serial(), you can allow for serially correlated idiosyncratic errors up to the specified order. This will affect the minimum lag order of instruments for predetermined and endogenous variables, and possibly the availability of nonlinear moment conditions. By default, serially uncorrelated idiosyncratic errors are assumed.
    • With option iid, you can add a homoskedasticity assumption in addition to serially uncorrelated idiosyncratic errors. This might enable additional linear or nonlinear moment conditions.
    • Option initdev is less intuitive. It assumes that the deviations of the initial observations from their long-run means are uncorrelated with the idiosyncratic errors. This relaxes the slightly stronger default assumption that initial observations and group-specific effects (not their deviations) must each be uncorrelated with the idiosyncratic errors. Under the default assumption, lagged levels can be used as instruments for the first-differenced/forward-orthogonally transformed model. Under the initdev assumption, only lagged first differences or backward-orthogonally transformed variables can be used as instruments. It also affects the type of nonlinear moment conditions that might be available.
    • With option stationary, additional first-differenced instruments become available for the level model. Nonlinear moment conditions become redundant.
    • If nonlinear moment conditions are undesired irrespective of the assumptions, option nonl can be specified.
    • In contrast to xtdpdgmm, the default estimator with xtdpdgmmfe is the iterated GMM estimator (igmm). Alternatively, the onestep, twostep, or continuously-updating GMM estimator (cugmm) can be requested.
    • By default, xtdpdgmmfe displays the respective xtdpdgmm command line used to estimate the model. This allows you to fine-tune your estimator using xtdpdgmm, which offers additional specialist options, and to see which options are implied by your chosen assumptions. If this feature is undesired, display of the command line can be prevented with option nocmdline.
    • Because the model is actually estimated by xtdpdgmm, the usual postestimation commands are available.
    See the help file for details:
    Code:
    help xtdpdgmmfe
    Here are some examples of conventional estimators, assuming that the regressors are predetermined. The examples also show how the xtdpdgmmfe syntax translates into the xtdpdgmm syntax:
    Code:
    . webuse abdata
    1. Anderson and Hsiao (1981) "difference IV" estimators with lagged levels or lagged differences as instruments:
    Code:
    . xtdpdgmmfe n w k, predetermined(w k) collapse curtail(1) nonl teffects onestep
    
      xtdpdgmm L(0/1).n w k , model(difference) gmmiv(L.n w k, lagrange(1 .)) collapse curtail(1) teffects nolevel onestep
    
    . xtdpdgmmfe n w k, predetermined(w k) initdev collapse curtail(1) nonl teffects onestep
    
      xtdpdgmm L(0/1).n w k , model(difference) gmmiv(L.n w k, lagrange(1 .) difference) collapse curtail(1) teffects nolevel onestep
    2. Arellano and Bond (1991) one-step "difference GMM" estimator with curtailed instruments:
    Code:
    . xtdpdgmmfe n w k, predetermined(w k) curtail(3) nonl teffects onestep
    
      xtdpdgmm L(0/1).n w k , model(difference) gmmiv(L.n w k, lagrange(1 .)) curtail(3) teffects nolevel onestep
    3. Arellano and Bover (1995) one-step "forward-orthogonal GMM" estimator with curtailed instruments:
    Code:
    . xtdpdgmmfe n w k, predetermined(w k) curtail(3) orthogonal nonl teffects onestep
    
      xtdpdgmm L(0/1).n w k , model(fodev) gmmiv(L.n w k, lagrange(0 .)) curtail(2) teffects nolevel onestep
    4. Hayakawa, Qi, and Breitung (2019) "backward/forward-orthogonal IV" estimator:
    Code:
    . xtdpdgmmfe n w k, predetermined(w k) initdev collapse curtail(1) orthogonal nonl teffects onestep
    
      xtdpdgmm L(0/1).n w k , model(fodev) gmmiv(L.n w k, lagrange(0 .) bodev) collapse curtail(0) teffects nolevel onestep
    5. Blundell and Bond (1998) two-step "system GMM" estimator with curtailed/collapsed instruments and doubly-corrected robust standard errors:
    Code:
    . xtdpdgmmfe n w k, predetermined(w k) stationary collapse curtail(3) teffects twostep vce(robust, dc)
    
      xtdpdgmm L(0/1).n w k , model(difference) gmmiv(L.n w k, lagrange(1 .)) gmmiv(L.n w k, lagrange(0 0) difference model(level)) collapse curtail(3) teffects twostep vce(robust, dc)
    6. Ahn and Schmidt (1995) two-step GMM estimator (with curtailed instruments and doubly-corrected robust standard errors) using nonlinear moment conditions valid under serially uncorrelated idiosyncratic errors without or with homoskedasticity:
    Code:
    . xtdpdgmmfe n w k, predetermined(w k) curtail(3) teffects twostep vce(robust, dc)
    
      xtdpdgmm L(0/1).n w k , model(difference) gmmiv(L.n w k, lagrange(1 .)) nl(noserial) curtail(3) teffects twostep vce(robust, dc)
    
    . xtdpdgmmfe n w k, predetermined(w k) iid curtail(3) teffects twostep vce(robust, dc)
    
      xtdpdgmm L(0/1).n w k , model(difference) gmmiv(L.n w k, lagrange(1 .)) nl(iid) curtail(3) teffects twostep vce(robust, dc)
    7. Chudik and Pesaran (2022) iterated GMM estimator (with collapsed instruments, centered weighting matrix, and doubly-corrected robust standard errors) using nonlinear moment conditions valid under serially uncorrelated idiosyncratic errors (and no endogenous regressors):
    Code:
    . xtdpdgmmfe n w k, predetermined(w k) initdev collapse teffects igmm vce(robust, dc) center
    
      xtdpdgmm L(0/1).n w k , model(difference) gmmiv(L.n w k, lagrange(1 .) difference) nl(predetermined) collapse teffects nolevel igmm vce(robust, dc) center
    8. Finally, a replication of a fixed-effects estimator in a static model (necessarily with strictly exogenous regressors):
    Code:
    . xtdpdgmmfe n w k, lags(0) exogenous(w k) collapse curtail(1) orthogonal teffects onestep norescale
    
      xtdpdgmm n w k , model(mdev) gmmiv(w k, lagrange(0 0)) collapse curtail(0) teffects nolevel onestep norescale
    
    . xtreg n w k i.year, fe
    To install the latest version of the package, type the following in Stata's command window:
    Code:
    net install xtdpdgmm, from(http://www.kripfganz.de/stata/) replace
    Suggested citation if you find this package useful in your work:
    References:
    • Ahn, S. C., and P. Schmidt (1995). Efficient estimation of models for dynamic panel data. Journal of Econometrics 68, 5-27.
    • Anderson, T. W., and C. Hsiao (1981). Estimation of dynamic models with error components. Journal of the American Statistical Association 76, 598-606.
    • Arellano, M., and S. R. Bond (1991). Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations. Review of Economic Studies 58, 277-297.
    • Arellano, M., and O. Bover (1995). Another look at the instrumental variable estimation of error-components models. Journal of Econometrics 68, 29-51.
    • Blundell, R., and S. R. Bond (1998). Initial conditions and moment restrictions in dynamic panel data models. Journal of Econometrics 87, 115-143.
    • Chudik, A., and M. H. Pesaran (2022). An augmented Anderson-Hsiao estimator for dynamic short-T panels. Econometric Reviews 41, 416-447.
    • Hayakawa, K., M. Qi, and J. Breitung (2019). Double filter instrumental variable estimation of panel data models with weakly exogenous variables. Econometric Reviews 38, 1055-1088.

