Hello everyone,
I am currently working on my thesis and was wondering whether my use of two-step system GMM is appropriate at all, or whether a plain-vanilla FE regression would do the job.
Roodman (2009) repeatedly notes that "xtabond2" is designed for datasets with small T and large N. I am wondering at what point a panel dataset is considered to have too large a T and too small an N?
My current dataset consists of over 170,000 observations on 18,000 companies over 30 years. As you can see, it is a (heavily) unbalanced panel.
My results are attached below in case they are of any use.
Could you give me an indication of whether two-step system GMM is of any use in this setting, or whether a plain FE regression would also do the job?
Code:
Dynamic panel-data estimation, two-step system GMM
------------------------------------------------------------------------------
Group variable: gvkey                           Number of obs      =    111060
Time variable : year                            Number of groups   =     18281
Number of instruments = 448                     Obs per group: min =         1
F(14, 18280)  = 14561.00                                       avg =      6.08
Prob > F      = 0.000                                          max =        29
-------------------------------------------------------------------------------------
                    |              Corrected
                COE | Coefficient  std. err.      t    P>|t|     [95% conf. interval]
--------------------+----------------------------------------------------------------
                COE |
                L1. |   .0746159   .0103966     7.18   0.000     .0542377    .0949942
                    |
         numest_log |   .0048291   .0004184    11.54   0.000     .0040091    .0056492
       eps_var_log2 |   .0088655   .0003942    22.49   0.000     .0080929    .0096382
            log_bmr |   .0166525   .0004454    37.39   0.000     .0157795    .0175254
             mv_log |  -.0102536   .0002588   -39.62   0.000     -.010761    -.0097463
               BETA |   .0036052   .0002894    12.46   0.000     .003038     .0041724
    financial_dummy |   .0051529   .0009553     5.39   0.000     .0032805    .0070253
       health_dummy |  -.0046829   .0009418    -4.97   0.000    -.0065289   -.0028369
   industrial_dummy |   .0024668   .0008792     2.81   0.005     .0007435    .00419
       it_tel_dummy |  -.0008938   .0009191    -0.97   0.331    -.0026952    .0009077
      oil_gas_dummy |   .0156918   .0018826     8.34   0.000     .0120016    .0193819
    materials_dummy |   .0122339   .0012172    10.05   0.000     .009848     .0146197
communication_dummy |   .0034575   .0014513     2.38   0.017     .0006128    .0063021
      utility_dummy |   .0033903   .0014856     2.28   0.022     .0004784    .0063023
              _cons |   .1972499   .0026227    75.21   0.000     .1921092    .2023907
-------------------------------------------------------------------------------------
Instruments for orthogonal deviations equation
  GMM-type (missing=0, separate instruments for each period unless collapsed)
    L(1/29).L.COE
Instruments for levels equation
  Standard
    numest_log eps_var_log2 log_bmr mv_log BETA financial_dummy health_dummy
    industrial_dummy it_tel_dummy oil_gas_dummy materials_dummy
    communication_dummy utility_dummy _cons
  GMM-type (missing=0, separate instruments for each period unless collapsed)
    D.L.COE
------------------------------------------------------------------------------
Arellano-Bond test for AR(1) in first differences: z = -24.06  Pr > z = 0.000
Arellano-Bond test for AR(2) in first differences: z =   0.58  Pr > z = 0.565
------------------------------------------------------------------------------
Sargan test of overid. restrictions: chi2(433) = 3464.68  Prob > chi2 = 0.000
  (Not robust, but not weakened by many instruments.)
Hansen test of overid. restrictions: chi2(433) = 1785.01  Prob > chi2 = 0.000
  (Robust, but weakened by many instruments.)
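One thing I noticed about the "weakened by many instruments" warning on the Hansen test: without collapsing, the GMM-style instrument count grows roughly quadratically in T. A quick back-of-the-envelope sketch (Python, using the textbook counting rule for difference GMM with a single lagged dependent variable; this is only an illustration of instrument proliferation, not an attempt to reproduce the 448 instruments reported above):

```python
# Textbook instrument counting for difference GMM with one lagged
# dependent variable (assumption: lags 2..t-1 of y are available as
# instruments in each period t). Illustrative only -- xtabond2's
# actual count also depends on the levels equation and missing data.

def uncollapsed_count(T: int) -> int:
    """One instrument per (period, lag) pair: sum over t=3..T of (t - 2)."""
    return sum(t - 2 for t in range(3, T + 1))

def collapsed_count(T: int) -> int:
    """With xtabond2's collapse option, each lag distance becomes a
    single instrument column, so the count is linear in T: T - 2."""
    return T - 2

if __name__ == "__main__":
    for T in (10, 20, 30):
        print(T, uncollapsed_count(T), collapsed_count(T))
```

With T = 30 this gives 406 uncollapsed versus 28 collapsed instruments for the lagged dependent variable alone, which is why Roodman (2009) suggests keeping the instrument count well below the number of groups (here 18,281) and reporting collapsed results as a robustness check.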
Have a nice day! :-)