  • Two issues regarding degrees of freedom in a fixed effects framework

    Dear all,

    I have two questions:

    1. I am conducting a fixed-effects regression whose specification I want to test using the Ramsey RESET test. There are two previous posts on this topic which were very helpful:

    http://www.stata.com/statalist/archi.../msg01236.html
    http://www.stata.com/statalist/archi.../msg00612.html

    I have now manually demeaned the data using the center command and regressed the (demeaned) dependent variable on the (demeaned) explanatory variables. Before conducting the RESET test, though, I need to correct for the degrees of freedom lost by demeaning, as mentioned in the earlier post by Prof. Schaffer. Can somebody tell me how to do that?
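    For reference, the manual demeaning (within transformation) described above amounts to the following (a Python sketch with made-up toy data, not the poster's Stata code; `within_transform` is a hypothetical helper). It also shows where the lost degrees of freedom come from: one group mean is estimated per panel unit.

```python
# Sketch with toy data (hypothetical, not the poster's dataset):
# the within (demeaning) transformation used in fixed-effects estimation.
from collections import defaultdict

def within_transform(values, groups):
    """Subtract each group's mean from its observations."""
    sums, counts = defaultdict(float), defaultdict(int)
    for v, g in zip(values, groups):
        sums[g] += v
        counts[g] += 1
    means = {g: sums[g] / counts[g] for g in sums}
    return [v - means[g] for v, g in zip(values, groups)]

groups = ["A", "A", "B", "B", "B"]
y = [1.0, 3.0, 2.0, 4.0, 6.0]
y_dm = within_transform(y, groups)

# Demeaning estimates one mean per group, so a regression on the
# demeaned data loses len(set(groups)) degrees of freedom.
dof_lost = len(set(groups))
print(y_dm)      # [-1.0, 1.0, -2.0, 0.0, 2.0]
print(dof_lost)  # 2
```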

    2. As I have evidence of serial correlation as well as heteroskedasticity in my error terms, I have to use the vce(robust) option for my model. As my panel is quite unbalanced, I then lose too many degrees of freedom to even calculate an F-test. I can test for the joint significance of all my parameters except for the constant using a partial F-test, though. Should I be worried about the inability of STATA to conduct an F-test? (Unfortunately, balancing the panel is not an option - I'd be left with nothing then.) Are there any suggestions on how to proceed in this situation?

    Thanks in advance for any help anyone can give. It will be much appreciated.

    Kind regards
    Judith

  • #2
    On #1 - if you are using Stata's estat ovtest, it will report a standard F that is calculated using the wrong degrees of freedom because it doesn't realise that you have partialled out the fixed effects. You can fix this by hand (it's just a multiplication). If you are using ivreset (available from SSC in the usual way), it will report a chi-square test stat and you won't need to do any adjustments.
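    The "just a multiplication" can be made concrete. Since F = [(RSS_r − RSS_u)/q] / (RSS_u/df), only the denominator degrees of freedom are wrong, so the corrected statistic is the reported one rescaled by the ratio of the correct to the reported df. A sketch with made-up numbers (`rescale_f` and all figures are hypothetical, not Stata output):

```python
# Hypothetical illustration: rescale a RESET F stat whose denominator
# degrees of freedom ignore the group means partialled out by demeaning.
def rescale_f(f_reported, df_reported, n_groups):
    """Only the denominator df of the F stat is wrong, so the corrected
    statistic is f_reported * df_correct / df_reported."""
    df_correct = df_reported - n_groups  # demeaning absorbed one mean per group
    return f_reported * df_correct / df_reported, df_correct

# e.g. 432 observations, 8 slope parameters + constant, 48 panel units
f_fixed, df_fixed = rescale_f(2.10, df_reported=432 - 8 - 1, n_groups=48)
print(round(f_fixed, 3), df_fixed)  # 1.862 375
```

    The p-value would then be taken from the F distribution with the corrected denominator df rather than the one in the printed output.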

    On #2 - I don't see why unbalancedness would cause problems for reporting an F test. Do you mean that you don't have very many clusters (panel members)? In that case you have a basic problem, because the cluster-robust covariance estimator relies on the number of clusters going off to infinity. What are N and T in your panel?

    PS: it's "Stata", not "STATA".



    • #3
      Dear Prof Schaffer,

      thanks a lot for your answer!

      On #1: I am using estat ovtest and was overcomplicating things; I see how to do the adjustment now.

      On #2: Yes, that is what I was getting at. N is 48 and T is 9.

      Kind regards
      Judith



      • #4
        N=48 is OK. My guess is that the missing F stat is not because you have more than 48 regressors, but rather you have something like a singleton regressor (=1 for one observation, =0 for the other 47). If the missing F stat is in the regression header, then it's probably highlighted and if you click on it you'll get an official Stata comment on the problem. Or you can get the message by using the online help: help j_robustsingular will do it. There's also a helpful post on Statalist from many years ago by Vince Wiggins from StataCorp if you can track it down.
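        The singleton-regressor situation described above can be checked mechanically before estimation (a hypothetical Python sketch on toy data; `singleton_dummies` and the variable names are made up):

```python
# Sketch (toy data): a dummy that equals 1 for exactly one observation
# carries no information once that observation's residual is zeroed out,
# which makes the cluster-robust VCE rank-deficient and suppresses the
# overall F stat. This flags such columns before estimation.
def singleton_dummies(columns):
    """Return names of 0/1 columns that equal 1 for exactly one observation."""
    flagged = []
    for name, col in columns.items():
        if set(col) <= {0, 1} and sum(col) == 1:
            flagged.append(name)
    return flagged

data = {
    "treated": [0, 1, 1, 0, 1, 0],
    "crisis":  [0, 0, 0, 1, 0, 0],  # 1 for a single observation: singleton
}
print(singleton_dummies(data))  # ['crisis']
```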



        • #5
          Thanks for pointing to that post (http://www.stata.com/statalist/archi.../msg00594.html). It was very helpful to read, and exactly that seems to be the problem: when I exclude one variable, I get an F-stat. My plan was to drop the singletons (there is the nice hdfe command developed by Sergio Correia). As I get the error message "no observations left after dropping singletons (270 obs. dropped); insufficient observations; r(2001)", however, I think I can either drop the variable completely (which I cannot do on theoretical grounds) or calculate a partial F-stat and note that my results are to be interpreted with caution.

          Kind regards
          Judith



          • #6
            It was a problem of measurement error and after correcting for it everything displays fine!! Thanks for the help!!



            • #7
              Originally posted by Judith Lechermann View Post
              It was a problem of measurement error and after correcting for it everything displays fine!! Thanks for the help!!
              What was the measurement error? I also face this problem when running hdfe. It shows: "no observations left after dropping singletons (77390 obs. dropped)
              insufficient observations
              r(2001);"



              • #8
                Cheng Su:
                welcome to this forum.
                As per FAQ, you're kindly requested to report the user-written command you're using (net describe hdfe, from(http://fmwww.bc.edu/RePEc/bocode/h)).
                That said, have you already checked that you do not have a missing value related issue with your data, after removing singletons?
                Kind regards,
                Carlo
                (Stata 19.0)



                • #9
                  Hi Cheng,
                  The error you see means literally what it says. First, use reghdfe instead of hdfe; it is the most up-to-date program by Sergio Correia.
                  Second, the problem you describe is not with the command but with your data. In a regression with fixed effects, you need more than one observation per fixed-effect group to identify that group's effect. If you have only one, you have a singleton (a unique observation for which the dummy/FE category is defined). Running the model with or without the singletons gives the same results, because they add no information to the estimation of the other parameters in the model.
                  Now, what happens with your data is that the command first looks for singletons in each FE list you are trying to absorb and drops them from the sample (again, because they add nothing to the estimation). Apparently, iteratively dropping singletons leaves your data empty, which is why you get the error.
                  You could still use other procedures to estimate the model without dropping the singletons, but you would not get meaningful results: basically, you would be attempting to absorb more fixed effects than the data contain information for.
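                  The iterative dropping described above can be sketched as follows (a toy Python illustration, not reghdfe's actual code; the data are made up). Each pass over one FE dimension can create new singletons in the other, and here the cascade empties the sample entirely, mirroring the r(2001) error:

```python
# Toy sketch: iteratively drop observations that are singletons in either
# fixed-effect dimension until the sample is stable.
from collections import Counter

def drop_singletons(rows, fe_keys=("firm", "year")):
    while True:
        dropped = False
        for key in fe_keys:
            counts = Counter(r[key] for r in rows)
            kept = [r for r in rows if counts[r[key]] > 1]
            if len(kept) < len(rows):
                rows, dropped = kept, True
        if not dropped:
            return rows

rows = [
    {"firm": 1, "year": 2001},
    {"firm": 1, "year": 2002},
    {"firm": 2, "year": 2002},
    {"firm": 2, "year": 2003},
    {"firm": 3, "year": 2003},
]
print(len(drop_singletons(rows)))  # 0: every observation ends up dropped
```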
                  Hope this helps.
                  Fernando

