  • #16
    Qurat:
    in long-T, small-N panel datasets you can add -i.panelid- and a squared term for -timevar- to investigate a potential turning point, as in the following toy example:
    Code:
    . use http://www.stata-press.com/data/r15/invest2
    
    . xtgls invest market stock i.company c.time##c.time
    
    Cross-sectional time-series FGLS regression
    
    Coefficients:  generalized least squares
    Panels:        homoskedastic
    Correlation:   no autocorrelation
    
    Estimated covariances      =         1          Number of obs     =        100
    Estimated autocorrelations =         0          Number of groups  =          5
    Estimated coefficients     =         9          Time periods      =         20
                                                    Wald chi2(8)      =    1505.04
    Log likelihood             =  -561.653          Prob > chi2       =     0.0000
    
    -------------------------------------------------------------------------------
           invest |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
    --------------+----------------------------------------------------------------
           market |   .1067342   .0154291     6.92   0.000     .0764938    .1369745
            stock |    .360213   .0326232    11.04   0.000     .2962727    .4241533
                  |
          company |
               2  |   56.58426   58.67971     0.96   0.335    -58.42586    171.5944
               3  |  -160.9345   41.61103    -3.87   0.000    -242.4907   -79.37841
               4  |   28.55782   59.07646     0.48   0.629    -87.22991    144.3455
               5  |   175.1112    41.3142     4.24   0.000      94.1369    256.0856
                  |
             time |     1.1721   4.913048     0.24   0.811    -8.457298     10.8015
                  |
    c.time#c.time |  -.0989248   .2392159    -0.41   0.679    -.5677793    .3699297
                  |
            _cons |   -86.2353   70.45214    -1.22   0.221     -224.319    51.84836
    -------------------------------------------------------------------------------
    
    .
    As the outcome does not support the evidence of a turning point, the linear term (or -i.year-) is probably the way to go:

    Code:
    . xtgls invest market stock i.company i.time
    
    Cross-sectional time-series FGLS regression
    
    Coefficients:  generalized least squares
    Panels:        homoskedastic
    Correlation:   no autocorrelation
    
    Estimated covariances      =         1          Number of obs     =        100
    Estimated autocorrelations =         0          Number of groups  =          5
    Estimated coefficients     =        26          Time periods      =         20
                                                    Wald chi2(25)     =    1852.07
    Log likelihood             = -551.8661          Prob > chi2       =     0.0000
    
    ------------------------------------------------------------------------------
          invest |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
          market |   .1260306   .0199352     6.32   0.000     .0869582    .1651029
           stock |   .3617764    .030956    11.69   0.000     .3011038    .4224491
                 |
         company |
              2  |   127.6598   71.85344     1.78   0.076    -13.17037      268.49
              3  |  -114.3793   49.70343    -2.30   0.021    -211.7962   -16.96232
              4  |   100.1193   72.18316     1.39   0.165    -41.35707    241.5957
              5  |   221.2348   48.80957     4.53   0.000     125.5698    316.8998
                 |
            time |
              2  |  -39.55323   40.75388    -0.97   0.332    -119.4294    40.32291
              3  |  -88.14978   45.33628    -1.94   0.052    -177.0073    .7077036
              4  |  -68.78843   38.55563    -1.78   0.074    -144.3561    6.779217
              5  |  -119.1439   40.40383    -2.95   0.003    -198.3339   -39.95381
              6  |  -89.04318   41.24527    -2.16   0.031    -169.8824   -8.203932
              7  |  -23.11078   40.68394    -0.57   0.570    -102.8498    56.62828
              8  |   -13.7191   38.85216    -0.35   0.724    -89.86795    62.42974
              9  |  -54.27664   39.62219    -1.37   0.171    -131.9347    23.38144
             10  |  -56.97186   39.81186    -1.43   0.152    -135.0017    21.05794
             11  |  -73.18178   41.14196    -1.78   0.075    -153.8185    7.454967
             12  |  -31.06858   41.96616    -0.74   0.459    -113.3207    51.18358
             13  |  -30.44892   39.47631    -0.77   0.441    -107.8211    46.92323
             14  |  -35.47242   39.77744    -0.89   0.373    -113.4348    42.48992
             15  |  -85.85061   40.23306    -2.13   0.033     -164.706   -6.995251
             16  |  -81.26772   40.58741    -2.00   0.045    -160.8176   -1.717866
             17  |  -63.89134   42.50002    -1.50   0.133    -147.1899    19.40717
             18  |  -62.64179   43.57669    -1.44   0.151    -148.0505    22.76696
             19  |  -68.87581   47.70983    -1.44   0.149    -162.3853    24.63373
             20  |  -109.4707   48.44522    -2.26   0.024    -204.4215   -14.51978
                 |
           _cons |  -113.0192   76.14074    -1.48   0.138    -262.2523    36.21389
    ------------------------------------------------------------------------------
    
    . testparm i.time
    
     ( 1)  2.time = 0
     ( 2)  3.time = 0
     ( 3)  4.time = 0
     ( 4)  5.time = 0
     ( 5)  6.time = 0
     ( 6)  7.time = 0
     ( 7)  8.time = 0
     ( 8)  9.time = 0
     ( 9)  10.time = 0
     (10)  11.time = 0
     (11)  12.time = 0
     (12)  13.time = 0
     (13)  14.time = 0
     (14)  15.time = 0
     (15)  16.time = 0
     (16)  17.time = 0
     (17)  18.time = 0
     (18)  19.time = 0
     (19)  20.time = 0
    
               chi2( 19) =   22.09
             Prob > chi2 =    0.2796
    
    .
    The -testparm- outcome does not support the joint significance of -i.time-.
    Kind regards,
    Carlo
    (Stata 18.0 SE)



    • #17
      Qurat ul Ain, could you please share the do-file by email at aaajku @ gmail .com?



      • #18
        Originally posted by Zahrah Rafique View Post
        Hi. I want to jump in. If the Hausman test indicates that the FE model should be used, and after testing for heteroskedasticity, autocorrelation, and cross-sectional dependence all three are present, can I use xtgls dep indep, panels(hetero) corr(ar1), provided I have a long panel where T is greater than N?
        Hi Zahrah, did you get the answer?



        • #19
          Hello Mr. Lazzaro,

          I am very new to Stata and doing undergraduate research. My dependent variable is NPLs (non-performing loans) and I have 6 independent variables: M3 = money supply, ER = exchange rate, GDP = GDP growth rate, BUG = government budget, INF = inflation rate, and DEBT = public debt. I have chosen 8 countries and a period of 12 years. After the Hausman test, when I ran the fixed-effects model, I did not find any cross-sectional dependence or autocorrelation, but I do have a heteroskedasticity problem in the panel data. Now, can I deal with the heteroskedasticity by using the command -xtgls NPLs M3 ER GDP BUG INF DEBT- under the FGLS method? To inform you: I have a strongly balanced dataset with T>N.

          Note: I am very much a novice, so I have only run the Hausman test. Please help me!
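          For reference, a minimal sketch of the command I describe above, with the heteroskedastic-panels option spelled out; the identifiers -country- and -year- are placeholder names, and the data are assumed to be declared as a panel already:
          Code:
          * declare the panel structure (hypothetical identifier names)
          xtset country year

          * FGLS allowing a panel-specific error variance (heteroskedastic panels)
          xtgls NPLs M3 ER GDP BUG INF DEBT, panels(heteroskedastic)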



          • #20
            Saom:
            if you have a T>N panel dataset and you're interested in the -fe- estimator, I would consider -xtregar-, which, unfortunately, does not allow you to manage heteroskedasticity.
            Even though, with so few panels, you are probably a bit too near the limit for clustered standard errors, I would stick with -xtreg,fe- with robust or clustered standard errors (both options do the very same job under -xtreg-).
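            A minimal sketch of that approach, reusing the regressors from #19 with a hypothetical panel identifier -country- (the data are assumed to be -xtset- already):
            Code:
            * fixed-effects estimator with cluster-robust standard errors
            * (under -xtreg-, -vce(robust)- is clustered on the panel variable anyway)
            xtreg NPLs M3 ER GDP BUG INF DEBT, fe vce(cluster country)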
            Kind regards,
            Carlo
            (Stata 18.0 SE)



            • #21
              Thank you for your response! By now I have run -xtreg,fe- with both robust and clustered standard errors, but unfortunately Prob > F is 0.086, which is not statistically significant, and only the p-value of GDP is significant (table below).

              Regression results
              Code:
                   NPLs |    Coef.   St.Err.   t-value   p-value   [95% Conf. Interval]   Sig
              ----------+------------------------------------------------------------------------
                     M3 |    .0003      .165      0.00      .999      -.389        .39
                     ER |   -.0572      .078     -0.73      .486      -.241        .127
                    GDP |   -.7327      .251     -2.91      .023     -1.327       -.138       **
                    BUG |    .2072      .197      1.05      .329      -.26         .674
                    INF |   -.1173      .064     -1.84      .109      -.268        .034
                   DEBT |   -.0335      .053     -0.63      .547      -.159        .092
               Constant |   19.709     6.457      3.05      .019      4.439      34.978       **
              -----------------------------------------------------------------------------------
              Mean dependent var     7.928     SD dependent var         5.947
              R-squared              0.190     Number of obs               96
              F-test                 3.109     Prob > F                 0.086
              Akaike crit. (AIC)   570.403     Bayesian crit. (BIC)   585.789
              *** p<.01, ** p<.05, * p<.1
              Given this, I have changed the approach a little. First I ran pooled OLS, then used the Breusch-Pagan / Cook-Weisberg test for heteroskedasticity, and again found that my data are heteroskedastic. After that, I created a log-transformed version of my dependent variable, NPLs2 (non-performing loans). With that logged dependent variable I reran the Hausman test. This time the Hausman test suggested the random-effects model, and now Prob > chi2 is 0.000 and 4 of the independent variables are significant (table below; a sketch of the commands I describe follows the table). Is my new approach correct?
              Code:
                  NPLs2 |    Coef.   St.Err.   t-value   p-value   [95% Conf. Interval]   Sig
              ----------+------------------------------------------------------------------------
                     M3 |    -.02       .009     -2.26      .024      -.036       -.003       **
                     ER |   -.003       .004     -0.62      .538      -.011        .006
                    GDP |   -.105       .017     -6.07      0         -.139       -.071      ***
                    BUG |    .039       .023      1.70      .09       -.006        .084        *
                    INF |   -.036       .013     -2.79      .005      -.061       -.011      ***
                   DEBT |   -.002       .006     -0.40      .689      -.014        .009
               Constant |   4.282       .627      6.83      0          3.053       5.51      ***
              -----------------------------------------------------------------------------------
              Mean dependent var     1.825     SD dependent var         0.757
              Overall r-squared      0.220     Number of obs               96
              Chi-square            43.712     Prob > chi2              0.000
              R-squared within       0.342     R-squared between        0.135
              *** p<.01, ** p<.05, * p<.1
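              A minimal sketch of the sequence of commands I describe above (the data are assumed to be -xtset- already; the logged outcome is created here as NPLs2):
              Code:
              * pooled OLS, then Breusch-Pagan / Cook-Weisberg test for heteroskedasticity
              regress NPLs M3 ER GDP BUG INF DEBT
              estat hettest

              * log-transform the dependent variable
              generate NPLs2 = ln(NPLs)

              * fixed- vs. random-effects on the logged outcome, then the Hausman test
              xtreg NPLs2 M3 ER GDP BUG INF DEBT, fe
              estimates store fe
              xtreg NPLs2 M3 ER GDP BUG INF DEBT, re
              estimates store re
              hausman fe re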
              Last edited by Saom Shawleen; 25 Aug 2021, 09:36.



              • #22
                Saom:
                did you run the last code you posted with Stata?
                Kind regards,
                Carlo
                (Stata 18.0 SE)



                • #23
                  Sorry, I could not understand your question. Can you please elaborate on the code you are asking about? Actually, I have very little experience with this software; it would be a great help if you could walk me through it. In my last answer, I tried to explain that I ran the OLS model first, then created a log variable of my dependent variable (NPLs2), and instead of NPLs I used that logged dependent variable (NPLs2) to run the Hausman test; the Hausman test then suggested the RE model, and I posted the RE model table in my last reply.
                  Last edited by Saom Shawleen; 25 Aug 2021, 09:54.



                  • #24
                    Saom:
                    1) Regression results: these are the results from an OLS regression (not from -xtreg,fe-). Moreover, this is not the usual output table that Stata gives back. That said, if you have panel data and you treat the observations as independent, your estimates are unreliable, as you fail to take the panel structure of your dataset into account.
                    2) The second table (which, again, does not resemble the usual output table that Stata gives back) refers to the outcome of a (fixed-effects?) panel data regression.
                    As general advice, panel data commands are the first choice when you deal with panel datasets.
                    Kind regards,
                    Carlo
                    (Stata 18.0 SE)



                    • #25
                      I declared the dataset as panel data first and then did all the tests. Please go through the attached pictures. I am sending all of them, and due to the picture limit I am posting another reply containing the first table I mentioned.

                      1) Regression results (table-1): FE robustness
                      [Attached screenshots of the Stata output: Screenshot (103).png through Screenshot (107).png]



                      • #26
                        This is the robustness table I was talking about. After that, I went for a new approach; I am sending the pictures of the new approach as well.

                        [Attached screenshot: Screenshot (108).png]



                        • #27
                          The second table: please refer to my earlier reply describing the new approach (pooled OLS, then the Breusch-Pagan / Cook-Weisberg test for heteroskedasticity, then a log-transformed dependent variable NPLs2, then a Hausman test that this time suggested the random-effects model, with Prob > chi2 of 0.000 and 4 significant independent variables). Is that new approach correct?
                          [Attached screenshots of the Stata output: Screenshot (109).png through Screenshot (113).png]



                          • #28
                            Based on the suggestion of the Hausman test, I ran that random-effects model again and found significant p-values.
                            [Attached screenshot: Screenshot (114).png]



                            • #29
                              Saom:
                              two issues here:
                              1) if you have panel data with a continuous regressand, you should go -xtreg- first;
                              2) if you detect heteroskedasticity and/or autocorrelation, you should impose -robust- or -vce(cluster clusterid)- standard errors, which -hausman- does not support.
                              Hence you should compare -fe- vs. -re- via the community-contributed module -xtoverid- (just type -search xtoverid- to spot and install it).
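                              A minimal sketch of that comparison, reusing the regressors from #19 with a hypothetical cluster identifier -country- (the data are assumed to be -xtset- already):
                              Code:
                              * install the community-contributed module once (or type -search xtoverid-)
                              ssc install xtoverid

                              * random-effects estimation with cluster-robust standard errors,
                              * then the robust Sargan-Hansen test of fixed vs. random effects
                              xtreg NPLs M3 ER GDP BUG INF DEBT, re vce(cluster country)
                              xtoverid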
                              As an aside, please do not post screenshots; use CODE delimiters instead (as per the FAQ). Thanks.
                              Kind regards,
                              Carlo
                              (Stata 18.0 SE)



                              • #30
                                Originally posted by Carlo Lazzaro View Post
                                Saom:
                                two issues here:
                                1) if you have panel data with a continuous regressand, you should go -xtreg- first;
                                2) if you detect heteroskedasticity and/or autocorrelation, you should impose -robust- or -vce(cluster clusterid)- standard errors, which -hausman- does not support.
                                Hence you should compare -fe- vs. -re- via the community-contributed module -xtoverid- (just type -search xtoverid- to spot and install it).
                                As an aside, please do not post screenshots; use CODE delimiters instead (as per the FAQ). Thanks.
                                First of all, I sincerely apologize for posting the screenshots.

                                I am dealing with a panel dataset with T>N. I ran -xtreg- first. Then I tried my best to use the -xtoverid- command, but I ran into severe difficulties with it: it requires creating some dummy variables, which I was unable to do. Hence, I ran the Hausman test, and it suggested that I follow the fixed-effects model. With the fixed-effects model, I found there is no autocorrelation or cross-sectional dependence, but the heteroskedasticity problem is there. I ran the model with both -robust- and -vce(cluster clusterid)- standard errors, but the result shows that Prob > F is .0818, which is not statistically significant.

                                Please suggest what I should do now. Thank you.
                                Last edited by Saom Shawleen; 26 Aug 2021, 00:07.

