  • The model F statistic is reported as missing

    I ran a fixed-effects model on my unbalanced panel data, but the estimation results report the model F statistic as missing. What is going on, and how can I rectify this situation? Tony

  • #2
    Tony:
    see -help j_robustsingular-.
    Kind regards,
    Carlo
    (Stata 19.0)



    • #3
      You could have shared data/output/command, as suggested in the FAQ.

      That said, this thread, as well as the links shared there, may be helpful to you: https://www.statalist.org/forums/for...tic-is-missing
      Best regards,

      Marcos



      • #4
        Help j_robustsingular says "Stata has done that so as to not be misleading, not because there is something necessarily wrong with your model." What does this statement mean, specifically? A bit more clarification would be very much appreciated, please. Tony



        • #5
          You don't show the actual output you got, so it is difficult to know with certainty what is happening. Nevertheless, as Carlo and Marcos have noted, the commonest situation in which we find missing model F statistics is when an analysis has been run using cluster robust standard errors and the number of model predictors exceeds the number of clusters.

          The cluster robust variance-covariance matrix has rank equal to the number of clusters minus 1--this is in contrast to the ordinary variance-covariance matrix, whose rank is the number of observations minus 1. Because of the reduced rank of this matrix, you can only test hypotheses involving #clusters minus 2 or fewer constraints.

          There is nothing wrong with this. It doesn't mean your model is wrong or in any way problematic. It just means that you cannot do an overall model test because it has too many predictors for the number of degrees of freedom available. In most situations, the overall model F statistic isn't important anyway.
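
          A minimal sketch of the situation described above (all variable names and the simulated data are invented for illustration): with only 5 clusters, the cluster-robust VCE has rank 4, so a model with 8 predictors cannot produce an overall F test, while a joint test of fewer constraints still works.

          ```stata
          * Sketch: more predictors than (clusters - 1) makes the model F missing.
          clear
          set seed 12345
          set obs 200
          gen id = ceil(_n/40)        // only 5 clusters
          gen t  = mod(_n-1, 40) + 1  // 40 periods per cluster
          forvalues k = 1/8 {
              gen x`k' = rnormal()
          }
          gen y = x1 + 0.5*x2 + rnormal()
          xtset id t

          * 8 predictors but only 5 clusters: the robust VCE has rank 4,
          * so the overall model test cannot be computed and F is missing.
          xtreg y x1-x8, fe vce(cluster id)

          * A joint test with few enough constraints is still feasible:
          test x1 x2
          ```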

          That said, when this situation arises you probably should step back a minute and ask yourself:

          1. Is my model cluttered up with a lot of pointless predictors that could be eliminated?

          2. Is the number of clusters in my data sufficiently large to justify the use of cluster robust standard errors? It is understood that cluster robust standard errors are not valid when the number of clusters is too small. There is disagreement on just what "too small" means: everyone would agree that 10 or fewer is too few, some would insist that they should not be used unless you have 50 or even 100 clusters, and I've seen rules of thumb in between. But even without a universally endorsed cutoff on the number of clusters, asking yourself whether you have enough clusters to make sense is a good idea.
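
          If you are unsure how many clusters you actually have, a quick way to count them (assuming, hypothetically, that the panel identifier is called id) is:

          ```stata
          * Number the clusters 1..G, then read off G as the maximum.
          egen nclust = group(id)
          quietly summarize nclust
          display "number of clusters = " r(max)
          ```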



          • #6
            Thank you, Clyde. Your answer truly makes sense. My data set was small, and therefore had few clusters but a large number of predictors. I appreciate it.

            Is there a way of testing for causality between the variables in my model? I am working with unbalanced panel data, using the fixed-effects estimator. There are some significant relationships between my predicted and predictor variables that I would like to test for causality or reverse causality. I tried using the community-contributed command -xtgcause-, but it only works for strongly balanced panel data. Any ideas?
