  • Dynamic panel data, with small N and large T

    I have panel data with N=17 and T=46. The model has a dynamic specification, as it includes a lagged dependent variable. It looks something like the equation below:

    Code:
    y_it = a*y_i,t-1 + b1*D1_it + b2*D2_it + b3*x_it + e_it

    where y is my dependent variable, x is a vector of covariates, and the Ds are dummy variables.

    A dynamic model is usually estimated by GMM; however, since my N is smaller than my T, GMM is not feasible in this case.
    Some papers suggest (1) running a separate regression for each group and averaging the coefficients over groups; (2) combining the data, defining a common slope, allowing for fixed or random intercepts, and estimating pooled regressions (Mairesse & Griliches 1988); (3) taking the data average over groups and estimating aggregate time-series regressions (Pesaran, Pierse & Kumar 1989; Lee, Pesaran & Pierse 1990); and (4) averaging the data over time and estimating cross-section regressions on group means (Barro 1991).
    I have also gone through some earlier posts but still lack clarity. My questions are:
    What method can be used to estimate a dynamic model with small N and large T?
    Is xtivreg appropriate in this situation?

  • #2
    Your T seems to be large enough such that the short-T dynamic panel data bias might become negligible. In that case, you could indeed just use xtivreg.
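    If the bias is still a concern, one common way to use xtivreg for this model is an Anderson-Hsiao-type estimator: first-difference away the fixed effects and instrument the differenced lag with a deeper lag of y. The sketch below uses placeholder variable names (id, year, y, x, D1, D2) matching the model in #1.

    ```stata
    * Anderson-Hsiao-type IV estimator (sketch; variable names are placeholders)
    * First-differencing removes the fixed effects; the differenced lagged
    * dependent variable L.y is instrumented with its second lag L2.y
    xtset id year
    xtivreg y x D1 D2 (L.y = L2.y), fd
    ```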
    https://www.kripfganz.de/stata/



    • #3
      Dear sir, thank you for your reply.



      Given my T of 46 years, can I estimate the model with fixed effects? (Perhaps I am wrong, but is that what your suggestion implies?) I did mention xtivreg; however, I do not have external variables to use as instruments.

      Also, I find that several papers use bias-corrected LSDV estimators, such as the Hausman-Taylor estimator (xthtaylor) and the Kiviet correction implemented by Bruno (xtlsdvc). Would it be better to use bias-corrected LSDV estimators given my data structure?




      • #4
        If all your other variables are exogenous, you can use the fixed-effects estimator. It might indeed be reasonable to double-check the results with a bias-corrected estimator, e.g. xtlsdvc. If T is large enough, then the results should not differ substantially.
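        As a sketch of that double-check (variable names are placeholders; xtlsdvc is the community-contributed Kiviet/Bruno bias-corrected LSDV command):

        ```stata
        * Fixed-effects (within) estimator; with T = 46 the Nickell bias
        * from the lagged dependent variable should be small
        xtset id year
        xtreg y L.y x D1 D2, fe

        * Bias-corrected LSDV (Kiviet correction, Bruno's implementation):
        * ssc install xtlsdvc
        * xtlsdvc adds the lagged dependent variable automatically;
        * initial(ah) uses Anderson-Hsiao for the initial estimate
        xtlsdvc y x D1 D2, initial(ah)
        ```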

        xthtaylor is not a bias-corrected estimator. It is an instrumental-variables estimator.
        https://www.kripfganz.de/stata/



        • #5
          Thanks for the suggestion. Yes, the rest of my variables are exogenous.

          Comparing seems like a good idea. I will go ahead and compare the results of xtreg with those from xtlsdvc.
          (Unfortunately, so far I have only compared xtreg with xthtaylor, which gave estimates that are pretty similar.)


          A small aside, though: can xtlsdvc be used in the case where T > 30?



          • #6
            Originally posted by Shailaja Tiwari View Post
            A small aside, though: can xtlsdvc be used in the case where T > 30?
            In theory: yes. (I have not tried it in practice.)
            https://www.kripfganz.de/stata/



            • #7
              Hi sir
              Thanks once again

              In theory: yes. (I have not tried it in practice.)
              Following your answer, I have been reading up on some empirical studies. Using this method does indeed seem to be how most papers deal with the endogeneity here.

              If T is large enough, then the results should not substantially differ.
              Indeed, my estimates are quite similar, as you mentioned.

              I just had one related thought: can the Hausman-Taylor estimator also be applied when T exceeds N?



              • #8
                Originally posted by Shailaja Tiwari View Post
                I just had one related thought: can the Hausman-Taylor estimator also be applied when T exceeds N?
                The Hausman-Taylor estimator makes very different assumptions. But yes, it can be applied when T is large.
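                For reference, a minimal xthtaylor call looks like the sketch below. Which regressors go in endog(), i.e., which are assumed correlated with the individual effect, is a substantive modeling assumption; the choice shown here is purely illustrative and the variable names are placeholders.

                ```stata
                * Hausman-Taylor estimator (sketch; the endog() choice is
                * illustrative, not a recommendation for this model)
                xthtaylor y L.y x D1 D2, endog(L.y D1)
                ```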
                https://www.kripfganz.de/stata/



                • #9
                  Originally posted by Sebastian Kripfganz View Post
                  Your T seems to be large enough such that the short-T dynamic panel data bias might become negligible. In that case, you could indeed just use xtivreg.
                  Dear Sir,
                  What are the criteria for determining whether T is large or small? If my T = 30, which case does it fall under? Also, could you please provide a reference for this? Thank you.



                  • #10
                    Originally posted by Shailaja Tiwari View Post
                    I have panel data with N=17 and T=46. The model has a dynamic specification as it includes a lagged dependent variable. [...]
                    Hi Madam,
                    Have you solved your problem? I went through your entire conversation, and some others listed on Statalist, but I still have some confusion. Let me first briefly describe my case: I have panel data with N = 4 and T = 30. The model is dynamic, i.e., it includes the lagged value of the dependent variable as an independent variable. You mentioned the same problem in your case. Madam, please guide me on how I can proceed with this dataset.



                    • #11
                      While T is not particularly large in your case, it is certainly large in relation to N. Therefore, "large-T, fixed-N" procedures will be more accurate, although the asymptotic distributions might still not be a very good approximation for the finite-sample distributions.

                      There are no fixed thresholds for deciding when T is "large". It depends not only on T's relation to N but also on unknown characteristics of the data-generating process, e.g. whether or not there is high persistence in the dependent variable. The more persistent the dependent variable is, the larger T should be for the asymptotic distribution to be a reasonable approximation. The catch here is that you need to choose an estimator first to learn something about the persistence. If there is high persistence and your T is not large enough, you might get biased results falsely indicating low persistence. So, you might erroneously conclude that T is large enough when it is not. Unfortunately, there is no general solution to this.
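                      One rough way to gauge the persistence, keeping the circularity caveat above in mind, is to inspect the estimated coefficient on the lagged dependent variable (sketch with placeholder variable names):

                      ```stata
                      * The closer the coefficient on L.y is to 1, the more
                      * persistent the series, and the larger T must be for the
                      * asymptotics to be a reliable approximation
                      xtreg y L.y x D1 D2, fe
                      ```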
                      https://www.kripfganz.de/stata/

