  • #16
    I noticed that once the number of control variables exceeds the number of clusters, the F value is missing. When calculating F(k, n) with clusters, k seems to be the number of control variables while n becomes the number of clusters minus the number of control variables. Could this be why the F-value is sometimes missing? Thank you.

    Comment


    • #17
      Yes. That's exactly right. When you use a clustered error structure, the degrees of freedom are the number of clusters minus the number of variables, and this is reflected in the F-statistic. When the degrees of freedom are 0 or negative, you cannot get an F-test.

      Comment


      • #18
        Claire:
        you might be interested in taking a look at -help j_robustsingular-.
        Kind regards,
        Carlo
        (Stata 18.0 SE)

        Comment


        • #19
          Thus, according to #9, if one cannot calculate the F-statistic in the regression analysis, can this be ignored and the estimates used, provided that -testparm- gives consistent results?

          Comment


          • #20
            Yes, if you are using cluster robust errors and the number of parameters in the model exceeds the number of clusters minus 1, then you will have no degrees of freedom for estimation and the overall model F-statistic will be missing. You can still rely on the t- or z-tests given in the regression output for individual parameter hypothesis tests. To test a group of parameters you can indeed use -testparm-; however it, too, will give missing values if the number of parameters you try to test exceeds the number of clusters minus 1. You can't get around the fact that you are limited in the number of parameters you can test simultaneously, but you can rely on tests that don't exceed that limit.
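            A minimal sketch of the situation described above, with hypothetical variable and cluster names:
            Code:
            * 4 clusters but 6 regressors: the overall model F will be missing
            regress y x1 x2 x3 x4 x5 x6, vce(cluster clustvar)
            * the t-tests shown in the output remain usable for individual parameters
            * a joint test of 3 parameters (3 constraints, within #clusters - 1) works
            testparm x1 x2 x3
            * a joint test of 4 parameters (4 constraints, over #clusters - 1) returns missing
            testparm x1 x2 x3 x4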

            Comment


            • #21
              Thank you!

              Comment


              • #22
                Another question: Clyde said that "...but you can rely on tests that don't exceed that limit". What would those other tests be?

                Comment


                • #23
                  Well, confining our attention to linear tests: any tests involving -test- or -testparm- that impose no more than #clusters - 1 independent constraints on the coefficients in the model. So, for example, if you have a regression model with 10 variables, and corresponding coefficients b1, b2, ..., b10, and there are only 4 clusters (which would not be a good use of clustered variance estimation, but I want to keep the numbers small for this illustration), then tests like
                  Code:
                  test b1 = b2 = b3 = b4
                  would give you a result, and one you can rely on, because the condition b1 = b2 = b3 = b4 amounts to exactly three independent constraints on the coefficients, namely: b1 = b2, b2 = b3, and b3 = b4. (The additional constraints that are implied by this, such as b2 = b4, are not independent constraints, so they don't "count" here.) But
                  Code:
                  testparm b1 b2 b3 b4
                  will fail, giving missing value as a result, because now there are four independent constraints: b1 = 0, b2 = 0, b3 = 0, b4 = 0, and that exceeds #clusters - 1.

                  Fortunately, you don't have to count constraints yourself, and you don't have to worry about which tests are valid. -test- and -testparm- have been well coded so that if you exceed the number of allowable constraints, the result will be missing value. If you are within the number of constraints allowed, you will get a value, and you can believe it (to the extent you can believe any statistic!)
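                  The redundancy counting above can also be checked by hand in Mata: put each constraint in a row of a matrix and look at its rank (a sketch; the rows below encode b1 = b2, b2 = b3, b3 = b4, and the implied b2 = b4):
                  Code:
                  mata
                  R = (1, -1, 0, 0 \ 0, 1, -1, 0 \ 0, 0, 1, -1 \ 0, 1, 0, -1)
                  rank(R)   // 3: the implied constraint b2 = b4 adds no independent row
                  end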

                  Comment


                  • #24
                    Thank you Clyde for your fast response!!!

                    When using the -test- command, it still drops constraints and then gives me:

                    chi2( 29) = 52.42
                    Prob > chi2 = 0.0049

                    Is the dropping of constraints fine? If I understand it right, the test can then only check the conditions without these constraints, and therefore it does no more than individual testing!?

                    Comment


                    • #25
                      Please show the full and exact -test- command that you used. Stata does not drop constraints to "get under the limit" imposed by the number of clusters. When you go over that limit, it returns a missing value for the test statistic. It drops constraints when they are not independent. So it should be the case that the dropped constraints are simply redundant: they are automatic consequences of the constraints that were not dropped. If you show the details, I can point out more exactly what happened.

                      Comment


                      • #26
                        Hello

                        I had a similar problem and found this thread useful.

                        If I am running this model: xtreg dv iv1 iv2 iv3 i.year, fe cluster(country)

                        and I want to use testparm iv1 iv2 iv3 after the clustered FE regression above (because my F statistic isn’t showing up), can I run testparm right after the xtreg command above?

                        I read somewhere that testparm should be run after a reduced regression, but I am not sure if that applies in this case.


                        Thanks

                        Comment


                        • #27
                          If -xtreg dv iv1 iv2 iv3 i.year, fe cluster(country)- is really your model (and you haven't just simplified it here), and you are not getting an F statistic, it implies that you have only 3 or fewer countries in your data set. In that case, the cluster robust standard error is simply not valid (it requires a large number of clusters) and you shouldn't use it at all.

                          Comment


                          • #28
                            Many thanks for your prompt reply.

                            I have in fact simplified it for here. It's more like -xtreg dv iv1 iv2 iv3 iv5 iv6 iv7 iv8 iv9 iv10 iv11 iv12 iv13 iv14 iv15 iv16 i.year, fe cluster(country)-


                            I have 16 independent variables and 78 countries. (When I run it with only 3 independent variables, I am still faced with a missing F statistic).




                            Fixed-effects (within) regression        Number of obs      =   372
                            Group variable: code                     Number of groups   =    78

                            R-sq: within  = 0.1918                   Obs per group: min =     1
                                  between = 0.2238                                  avg =   4.8
                                  overall = 0.2186                                  max =     9

                                                                     F(23,77)           =     .
                            corr(u_i, Xb) = -0.0444                  Prob > F           =     .

                            (Std. Err. adjusted for 78 clusters in code)

                            Comment


                            • #29
                              Please post the complete output Stata gave you. Read the Forum FAQ, with emphasis on #12 to learn about using code delimiters to make posted code and results more readable.

                              Added: Another cause of missing F-statistics after vce(cluster ...) is a "dummy" variable that takes on the value 1 in only a single observation from the estimation sample. Look into that.
                              Last edited by Clyde Schechter; 17 Dec 2018, 09:34.
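                              One quick screen for the situation described above (a sketch; the indicator name is hypothetical):
                              Code:
                              * after the regression, count estimation-sample observations with the
                              * suspect indicator equal to 1; a count of 1 signals the problem
                              count if mydummy == 1 & e(sample)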

                              Comment


                              • #30
                                Sorry about that! It's my first time using the forum and I am still trying to figure out my way around it. I don't have any dummy variables aside from the year fixed effects. I am basically testing the impact of country-level elements such as property right regulations, taxes, etc. on business startups (dv).

                                The IVs are all numerical (indexes).



                                Here is the overall output:


                                . xtreg dv iv1 iv2 iv3 iv4 iv5 iv6 iv7 iv8 iv9 iv10 iv11 iv12 iv13 iv14 iv15 iv16 i.year, cluster(code) fe

                                Fixed-effects (within) regression        Number of obs      =   372
                                Group variable: code                     Number of groups   =    78

                                R-sq: within  = 0.1918                   Obs per group: min =     1
                                      between = 0.2238                                  avg =   4.8
                                      overall = 0.2186                                  max =     9

                                                                         F(23,77)           =     .
                                corr(u_i, Xb) = -0.0444                  Prob > F           =     .

                                                          (Std. Err. adjusted for 78 clusters in code)
                                ------------------------------------------------------------------------------
                                             |               Robust
                                          dv |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
                                -------------+----------------------------------------------------------------
                                         iv1 |  -.0489022   .0606747    -0.81   0.423    -.1697209    .0719165
                                         iv2 |   .1339367   .2970449     0.45   0.653    -.4575554    .7254287
                                         iv3 |   -2.10723    .997487    -2.11   0.038     -4.09348   -.1209793
                                         iv4 |   .1106484   .0366675     3.02   0.003      .037634    .1836627
                                         iv5 |  -.0026083   .0179251    -0.15   0.885    -.0383017     .033085
                                         iv6 |   1.069398   .3384452     3.16   0.002     .3954677    1.743329
                                         iv7 |   .1294031   .9938728     0.13   0.897     -1.84965    2.108457
                                         iv8 |   -.843901   .4557007    -1.85   0.068    -1.751317     .063515
                                         iv9 |   1.773365   .7521569     2.36   0.021     .2756291    3.271101
                                        iv10 |   .2974538   .4061802     0.73   0.466    -.5113543    1.106262
                                        iv11 |  -.3885775   .7782453    -0.50   0.619    -1.938262    1.161107
                                        iv12 |   3.35e-14   2.86e-13     0.12   0.907    -5.36e-13    6.03e-13
                                        iv13 |  -.1780276    .075653    -2.35   0.021    -.3286719   -.0273832
                                        iv14 |  -.4987884   1.777302    -0.28   0.780    -4.037848    3.040271
                                        iv15 |   1.468546   1.392599     1.05   0.295    -1.304474    4.241566
                                        iv16 |  -2.715568   2.679906    -1.01   0.314    -8.051943    2.620808
                                             |
                                        year |
                                        2007 |  -.1322855   .7268369    -0.18   0.856    -1.579603    1.315032
                                        2008 |   .6653619   .7184461     0.93   0.357     -.765247    2.095971
                                        2009 |   .4822669    .875941     0.55   0.584    -1.261954    2.226488
                                        2010 |  -.2267476   .8766323    -0.26   0.797    -1.972345     1.51885
                                        2011 |   1.825553   .9071357     2.01   0.048     .0192155    3.631891
                                        2012 |   2.194199   .9171791     2.39   0.019     .3678622    4.020536
                                        2013 |    2.71085   .9479954     2.86   0.005     .8231496     4.59855
                                        2014 |   2.592016   .8525771     3.04   0.003     .8943185    4.289714
                                             |
                                       _cons |   8.588517   8.848354     0.97   0.335    -9.030807    26.20784
                                -------------+----------------------------------------------------------------
                                     sigma_u |  6.6338123
                                     sigma_e |  2.5614295
                                         rho |  .87025647   (fraction of variance due to u_i)
                                ------------------------------------------------------------------------------




                                Do you think testparm would work after this model? Or does testparm only work after a reduced regression model? I read that somewhere, but I am unsure whether it applies to my case here.








                                Comment
