
  • Fixed or random effect models for panel data with only two time points?

    I am analyzing panel data on children (aged 0 to 15 at the 2010 baseline survey) surveyed every two years. I have three waves of data: 2010, 2012, and 2014. At the 2010 and 2014 waves, children over 10 years old each take a math test and a word test. There is also information about family structure at each wave, e.g. whether the parents are living at home or absent due to migration. I am trying to understand how parental absence due to migration affects children's test scores in 2010 and 2014. Test scores are continuous. In the final data, children with test scores include: (1) children aged 10 to 15 in 2010, (2) children aged 10 to 15 in 2010 with follow-up scores in 2014 (when they are about 14 to 19), and (3) children aged 6 to 9 in 2010 who are first tested in 2014 (when they are about 10 to 14). Only group 2 (10 to 15 at each wave) has test scores in both years, and even some of these children have missing test scores in 2014. My focal research question is: how do the various types of parental absence due to migration (five categories: both parents at home, only mother at home (father a migrant), only father at home (mother a migrant), no parent at home (both migrants), divorce/death of parents) affect children's test performance?

    I first use Stata's -xtreg- random-effects procedure to conduct the analysis. Wordtest2yr is the test score, absence_5cat is the parental absence variable of interest, and wave is the survey panel indicator. All other covariates (parents' education and age) are time-varying, and parental absence before age 3 (livenoparbf3) is time-invariant. The following two commands with robust standard errors generate the same results, and the effects of absence_5cat are significant and sensible:

    xi: regress wordtest2yr i.sex c.age_w1w3##c.age_w1w3 i.absence_5cat i.age_baba i.age_mama_x i.edu_baba i.edu_mama i.livenoparbf3 i.wave if urban_com==1, vce(cluster pid)
    xtreg wordtest2yr i.sex c.age_w1w3##c.age_w1w3 i.absence_5cat i.age_baba i.age_mama_x i.edu_baba i.edu_mama i.livenoparbf3 i.wave if urban_com==1, re vce(robust) theta

    As I understand that fixed-effects models should produce more robust, unbiased estimates, I also run the fixed-effects model, as follows:

    xtreg wordtest2yr i.sex c.age_w1w3##c.age_w1w3 i.absence_5cat i.age_baba i.age_mama_x i.edu_baba i.edu_mama i.livenoparbf3 i.wave if urban_com==1, fe vce(robust)

    However, the fixed-effects model generates very different coefficients for absence, in both magnitude and sign. Although the magnitudes are large, none is significant, and the signs make no sense to me. I have also tried removing the "wave" indicator and get similar results. Results of the random-effects and fixed-effects models are shown below:

    ** Random effects:
     wordtest2yr |     Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
    absence_5cat |
               1 |   1.52727   .7799622    1.96   0.050    -.0014281   3.055968
               2 |  .2929886   1.316119    0.22   0.824    -2.286557   2.872535
               3 |   1.31037   .5621309    2.33   0.020      .208614   2.412127
               4 |  1.689032   .7019148    2.41   0.016     .3133047    3.06476

    ** Fixed effects:
    absence_5cat |
               1 | -1.216155   3.046883   -0.40   0.690    -7.193684   4.761373
               2 |  2.254528   2.819032    0.80   0.424    -3.275991   7.785047
               3 | -1.101228   2.848645   -0.39   0.699    -6.689842   4.487385
               4 |  1.554859   3.088102    0.50   0.615    -4.503534   7.613251

    In a post elsewhere, I was advised to adopt a difference-in-differences approach and create treatment and control groups for parental absence between 2010 and 2014. As there are five categories of parental absence, there would be many possible change combinations between 2010 and 2014. I also understand that the difference-in-differences approach will only keep children who have both 2010 and 2014 test scores, which means many children with only one test would be excluded from the analysis.

    I would like to hear your valuable suggestions on the best way to proceed.

  • #2
    You're most likely to get a useful answer if you follow the FAQ on asking questions - provide Stata code in code delimiters, readable Stata output (fixed spacing fonts help), and sample data using dataex. It is also better if you can shorten your post to highlight the critical issues and even run fewer variables if the results still show your problem.

    I think Clyde has commented on two-period panel models on this listserve in the last few months. You should look up his post.





    • #3
      It is important to remember that a fixed-effects regression is a within-panel (in this case, within-child) regression. It is not uncommon for the effect of a regressor within panels to differ from its effect between panels, even to be opposite in sign. That is, in this case, a change in the state of parental absence can have a different effect from the effect of being in that same state. Here's a very simple example that demonstrates the general principle:

      Code:
      * Example generated by -dataex-. To install: ssc install dataex
      clear
      input float(id x y)
      1  1 15.202559
      1  2 14.104263
      1  3  13.02977
      2  4 17.827787
      2  5  16.92708
      2  6 16.086184
      3  7 20.976065
      3  8 20.051655
      3  9 18.818798
      4 10 23.898487
      4 11 23.109537
      4 12 22.133265
      end
      
      graph twoway scatter y x, mlabel(id) msym(i) mlabpos(0)
      
      xtset id
      xtreg y x, re
      xtreg y x, fe
      Study the graph, and then look at the two regression outputs, and you will see what is going on.

      Something like this may be happening in your data as well, which, in my opinion (from outside your discipline) is actually a more interesting finding!



      • #4
        Dear Clyde Schechter, If our data does exhibit this phenomenon, from which perspective can we interpret the finding?

        [Attached image: fixandrandom.png]
        Last edited by Chen Samulsion; 05 Mar 2018, 22:37.



        • #5
          Both. You just have to be clear about which perspective you're taking when you describe your results. One reflects within-person differences, the other reflects cross-person differences. The effects of getting married can be different from the effects of being married. The effect of moving to Florida can be different from the effect of being a Florida resident. The effect of having a parent become absent can be different from the effect of having an absent parent. Both perspectives are correct, but neither describes the other, so it is important to make explicit which perspective you are taking when discussing results.



          • #6
            Thank you Clyde Schechter. In a past post more than half year ago, you talked about year dummies in fixed-effects model:

            In finance and economics, year dummies (or quarter, or month, etc.) are the norm. There is a real basis for this: these variables are actually subject to appreciable unpredictable shocks over short periods of time, and you need to account for those. But in my discipline, epidemiology, that is usually not the case. Most of our variables are either stable over time, or exhibit consistent directional trends over time, with only negligible short-term variation. Using short-time-period indicator variables is generally not the best way to capture that in your model. So, again, I think you need to think about how your variable actually behaves over long and short time periods and model accordingly. In finance and economics, that doesn't usually require a lot of thought, but occasionally it might, and in other disciplines it almost always does.
            https://www.statalist.org/forums/for...-re-regression
             Does it suggest that if the majority of covariates are stable over time, giving rise to small within-person differences, then the fixed-effects model will fit the actual data poorly?
            Last edited by Chen Samulsion; 06 Mar 2018, 10:11.



            • #7
               Does it suggest that if the majority of covariates are stable over time, giving rise to small within-person differences, then the fixed-effects model will fit the actual data poorly?
               I wouldn't put it that way. If the covariates are stable over time, there will be little within-person variation, and the fixed-effects model will be ignoring the potentially larger variation between persons. But that doesn't mean the model will fit the data poorly. The fit could still be quite good: remember that the fixed-effects model only attempts to fit the within-person variation, and it may do that very well.

              Also, if some covariates are stable over time but others vary over time, then the fixed-effects model will still be an excellent way to examine the impact of those time-varying covariates.

              The comment you quoted was actually intended to make a different point. In that thread, there was discussion about whether, or how, to model time effects in a certain problem. My point there was that it depends on the nature of the time effects in the actual data generating process, and that the model should be chosen to best match and express that reality. It was not a comment about fixed-effects vs between or random effects.

              The main question is: what is your research question? Are you looking to study the within-person effects of these predictors? Or the between-person effects? Or perhaps both--in which case you may need to do two separate models (one -fe- and one -be-).



              • #8
                 Assume the panel data variation is composed of within-subject variation and between-subject variation. If in the actual data the within-subject part is trivial (because the variables are almost stable over time), how can the fixed-effects model fully capture the whole data structure?



                • #9
                   If in the actual data the within-subject part is trivial or little, how can the fixed-effects model fully capture the whole data structure?
                  It can't. You can even go beyond this: even when the within-subject part is large, -fe- always ignores the between-subject part. So -fe- will never capture the whole data structure, unless there is no between subject variation at all. It isn't intended to do that and doesn't pretend to.

                  I think the closest one can come to a model that captures both the between and within subject effects simultaneously would look something like this:

                  Code:
                   by person, sort: egen between_x = mean(x)  // person-level mean of x: the between component
                   gen within_x = x - between_x               // deviation from own mean: the within component
                   
                   xtset person
                   xtreg y between_x within_x, re
                   This model represents and separately estimates the between and within effects of x on y. Note that it uses -re-. It can't be done with -fe-, because with -fe- the between_x variable would be collinear with the fixed effects and would be omitted.



                  • #10
                     Got it, I'm clearer on this topic now. Thank you very much.
                     Maybe I can cite Alfonso Sánchez-Peñalver's closing remark:

                    By choosing fixed-effects we can only explain what happens across time on average. We cannot explain what happens across panels when a variable changes. And the point is that those effects may be completely different.
                    Last edited by Chen Samulsion; 06 Mar 2018, 12:39.



                    • #11
                      Hello Phil,

                      Thanks for your reminder. Sorry for my long post. I will heed your advice and keep my future posts short and clear.



                      • #12
                        Hi Clyde, thanks for your reply and detailed answers to my post.

                        That is, in this case, a change in the state of absence of parents, can have a different effect from the effect of being in that same state.
                        In my study of the effect of parental absence (varname: tz_5cat) on test scores, I have five types of parental absence (0=both absent, 1=mom absent, 2=dad absent, 3=both present, 4=one/both parents dead or divorced). The FE model output is shown below:

                        Code:
                        xtreg wordtest2yr i.sex_self_x c.age_self_x##c.age_self_x i.tz_5cat    if urban_com==1, fe //vce(robust)
                        
                        note: 1.sex_self_x omitted because of collinearity
                        
                        Fixed-effects (within) regression               Number of obs      =      1827
                        Group variable: pid                             Number of groups   =      1352
                        
                        R-sq:  within  = 0.4104                         Obs per group: min =         1
                               between = 0.1995                                        avg =       1.4
                               overall = 0.2421                                        max =         2
                        
                                                                        F(6,469)           =     54.42
                        corr(u_i, Xb)  = -0.0855                        Prob > F           =    0.0000
                        
                                      wordtest2yr |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
                        --------------------------+---------------------------------------------------------------
                                     1.sex_self_x |          0  (omitted)
                                       age_self_x |   6.621464   .6231128    10.63   0.000     5.397026    7.845903
                        c.age_self_x#c.age_self_x |  -.1936534   .0218408    -8.87   0.000    -.2365714   -.1507355
                                          tz_5cat |
                                                1 |  -1.146223   2.326945    -0.49   0.623    -5.718751    3.426304
                                                2 |   2.157628   2.816156     0.77   0.444    -3.376218    7.691474
                                                3 |  -1.088708   2.043918    -0.53   0.595    -5.105078    2.927662
                                                4 |   1.327469   2.469953     0.54   0.591    -3.526076    6.181014
                                            _cons |  -26.92464   4.766899    -5.65   0.000    -36.29176   -17.55752
                        --------------------------+---------------------------------------------------------------
                                          sigma_u |  5.4172048
                                          sigma_e |  4.3005192
                                              rho |  .61341464   (fraction of variance due to u_i)
                        
                        F test that all u_i=0:  F(1351, 469) = 1.84                    Prob > F = 0.0000
                        Then can I interpret the coefficients from the FE model as the effect of getting INTO that type of parental absence (from any other type) compared to getting into the reference type of parental absence (tz_5cat=0, both absent)? For example, becoming "mother absent only" (=1) decreases the word test score by 1.146 relative to becoming "both parents absent" (=0)?

                        My second question: although the coefficients are large, -test- does not show tz_5cat to be a significant predictor in the FE model. In the RE model, one coefficient of tz_5cat, 3.tz_5cat, has a p-value just below 0.05. However, the joint test (test 1.tz_5cat 2.tz_5cat 3.tz_5cat 4.tz_5cat) yields p = 0.342 with chi2(4) = 4.51. Does this mean that tz_5cat as a whole is not a significant predictor? Do I need to report the one significant coefficient, 3.tz_5cat?

                        Based on your explanation above, the coefficients of parental absence in the RE is not about "getting into that type", but a mixture of getting into and being in that type. I wonder whether my interpretation is right?

                        I run a Hausman test to compare the FE and RE models, and it yields chi2(6) = 9.6 with Prob > chi2 = 0.142. I suppose this indicates the FE and RE models are not much different. Does it mean I can choose the RE model instead?

                        I know that this is again a long post. But I have many questions and I hope I have made it clear. Look forward to your advice.



                        • #13
                          For example, becoming "mother absent only" (=1) decreases the word test score by 1.146 relative to becoming "both parents absent" (=0)?
                          Right.

                          My second question: although the coefficients are large, -test- does not show tz_5cat to be a significant predictor in the FE model. In the RE model, one coefficient of tz_5cat, 3.tz_5cat, has a p-value just below 0.05. However, the joint test (test 1.tz_5cat 2.tz_5cat 3.tz_5cat 4.tz_5cat) yields p = 0.342 with chi2(4) = 4.51. Does this mean that tz_5cat as a whole is not a significant predictor? Do I need to report the one significant coefficient, 3.tz_5cat?
                          Using statistical significance as a filter to decide what results to report is a disastrously bad practice. It is one of the main pillars of the crisis of reproducibility in science. When you do that, you cherry pick the data, selectively report overestimated effects, and many are even of the wrong sign when you are dealing with low power situations. If your research hypothesis was that tz_5cat as a whole is a determinant of test scores, then you should report your findings in full: all of the coefficients, confidence intervals, p-values if you wish, and joint-test p-value in their entirety. If your research hypothesis originally focused only on one category of tz_5cat, and the others were included just "for completeness" but are not of interest, then you should report the results for that one category only. This decision is based on the research hypothesis you formulated before you analyzed the data and should not be influenced by having seen the results.
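
                          As a mechanical aside, the joint hypothesis can be run in one step with -testparm- rather than listing each level; a sketch, assuming a model containing i.tz_5cat has just been fitted:

                          ```stata
                          * Joint Wald test that all levels of tz_5cat are jointly zero;
                          * run immediately after -xtreg- (or -regress-) with i.tz_5cat:
                          testparm i.tz_5cat
                          * equivalent to: test 1.tz_5cat 2.tz_5cat 3.tz_5cat 4.tz_5cat
                          ```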

                          Based on your explanation above, the coefficients of parental absence in the RE is not about "getting into that type", but a mixture of getting into and being in that type. I wonder whether my interpretation is right?
                          That is correct, the RE estimates are a weighted average of the within and between effects. In fact, in -xtreg, re- they are actually calculated that way--the FE and BE models are both run and the results of the two are averaged.

                          I run a Hausman test to compare the FE and RE models, and the test generates an chi2(6)=9.6 and Prob>chi2=0.142. I suppose it indicates the FE and RE models are not much different. Then does it mean I can choose RE model instead?
                          In some disciplines, the Hausman test is the ultimate arbiter of FE vs RE model selection, and if yours is one of those I'm not going to advise you to swim against the current, at least not until you are fairly senior in your area. If you have to follow the Hausman test, then the RE estimator would be used here.

                          If you are not in one of those disciplines, then the first consideration is whether your research goal is to evaluate within-person effects. If so, the FE estimator gives you that, and the RE estimator does not. So I would, in that case, go with FE regardless of what Hausman or anybody else says. Matching the analysis to the research goal is the most important thing and overrules any other consideration. Getting the right answer to the wrong question is not desirable.

                          If you are mainly interested in evaluating between effects, then, if the RE and FE models are very similar, it is reasonable to use the RE model, as it emphasizes the between effects (although it is also contaminated with within effects). If you really need to assess pure between-person effects, then the BE estimator should be used. But, as I said, if the FE and RE models are very similar, then they are also similar to the BE model and there is a strong concordance between the within- and between-person effects of the variables.

                          In my discipline the Hausman test is rarely used, and I live by a principle that model selection is never to be based on a p-value. The p-value is an undifferentiated mush of sample size, variances, and differences, and you cannot tell what actually drives the p-value. For the purpose of distinguishing RE from FE, all that is relevant is the differences in the model coefficients. But there is no way to disentangle that from the other garbage in the p-value. My own approach is to look at the predicted values of the two models and see how well each of those fit the data. I tend to choose the better fitting model, provided the models do not have so many variables that we are just over-fitting the noise in the data.
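
                          A minimal sketch of that predicted-values comparison, using variable names borrowed from this thread (illustrative only, not code from the original analysis):

                          ```stata
                          * Fit both models and compare how well the linear predictions track the outcome
                          xtreg wordtest2yr i.absence_5cat i.wave, re
                          predict double xb_re, xb          // linear prediction from the RE model

                          xtreg wordtest2yr i.absence_5cat i.wave, fe
                          predict double xb_fe, xb          // linear prediction from the FE model (excludes u_i)

                          * Squared correlation with the observed outcome as a rough fit measure
                          * (restrict to the estimation sample of the last model fitted)
                          corr wordtest2yr xb_re if e(sample)
                          corr wordtest2yr xb_fe if e(sample)
                          ```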





                          • #14
                            Hi Clyde, Thanks for your sage advice on reporting the coefficients of variables and selection of FE and RE models. On reporting of the coefficients, you mentioned that:
                            If your research hypothesis was that tz_5cat as a whole is a determinant of test scores, then you should report your findings in full: all of the coefficients, confidence intervals, p-values if you wish, and joint-test p-value in their entirety.
                            As you suggested, I will report the findings for the variable of interest tz_5cat in full, with the coefficients of each of the categories. In the RE model one category, 3.tz_5cat, has a coefficient with a p-value just below 0.05, but the joint-test p-value for all the tz_5cat coefficients is 0.342. This seems to show that tz_5cat is not a significant predictor overall, yet there is a significant difference between categories 0 and 3. How should I explain these seemingly contradictory results to the reader?

                            Upon further thought, I think it is better to choose the RE model over the FE model, because only a small proportion of the sample changed their parental absence status between the two waves. The small sample may not have enough power to identify any significant effect. Also, the FE coefficients, interpreted as the effect of "getting into that parental absence type" from any other type, do not capture the possibility that the effect of a change may differ depending on the baseline (wave 1) parental absence type. That means the baseline parental absence should be added to the model. How do I set this up in the FE model? Should I include a lagged value of the parental absence state tz_5cat in the model? Or maybe I can generate and add a variable showing the different types of parental absence status change and no-change between the two time points, such as 0="0 to 0" (both parents absent, no change between the two time points), 1="0 to 1", 2="0 to 2", etc. It seems that in both cases the baseline data are lost, and an FE model cannot work with data at only one time point.

                            You also mentioned that you tend to choose the better-fitting model by looking at the predicted values:
                            "My own approach is to look at the predicted values of the two models and see how well each of those fit the data."
                            Can the 3 R-sq values be of some help here, since they are supposed to indicate the percentage of variance of the dependent variable explained by the model?
                            If so, which of the 3 R-sq values can we use? The FE model results above show both within and between R-sq values; I don't understand why an FE model can have between R-sq explanatory power.

                            Thanks again.



                            • #15
                              How should I explain the seemingly contradictory results to the reader?
                               Well, this type of result seems contradictory only to people who don't actually understand what p-values mean. Unfortunately, that is nearly everybody--which is why I dislike reporting p-values so much. But, at least in some contexts, you can't avoid it. There are a couple of things you can say.

                               One is the Type I error issue. Since you have 5 categories in this variable, and you have a p-value for all but one of them, you have done 4 significance tests. If you Bonferroni correct those p-values, then the illusion of statistical significance goes away in this case. (By the way, Bonferroni correction of p-values for multiple tests was actually developed initially for precisely this purpose: correcting the post-hoc tests of individual level effects from a multi-level categorical variable. It has since gone on to be used more generally.) Another way of saying this is that the result for that one level may well just be a Type I error.

                               A more fundamental issue, and one that is nearly impossible to convey to non-technical audiences, is that the critical region for the test of a 1 df hypothesis is an interval, but the critical region of the test for a multiple df hypothesis is not the cross product of those intervals. Instead, the critical region is an ellipsoid, and, to make it even more complicated, if the predictors involved in the hypothesis are not orthogonal, the ellipsoid's axes may be oblique to the predictors. So geometrically there is no necessary connection between the significance of a single predictor and the significance of a multi-predictor test that includes it.

                               The most fundamental issue of all is, of course, that statistical significance is a dubious construct in the first place. It starts from a p-value, which is itself a difficult statistic to understand because it confounds the impacts of actual effect size, sample size, and outcome variance--you can't really tell which of these is driving it. Then it takes this very mushy statistic and converts it from a continuous variable to a dichotomy, "significant vs not significant," which is just as meaningless as taking even a really good statistic and trashing it by imposing an arbitrary cutoff. So people end up misinterpreting statistical significance as meaning "there is an effect" vs "there is no effect," whereas in truth it means nothing of the kind. It is important to remember that "statistical significance" is not a description of an effect; it is a highly distorted characterization of a complicated statistic that is influenced by the effect in question, but also by other things. It's low-grade sausage leftovers--it's not filet of beef.

                               An accurate way of interpreting p-values is this: when the p-value is higher, it means that the precision with which the analysis was able to estimate the effect is lower. Above some point one might say that the analysis is so imprecise that it leaves us considerable doubt about even the direction of the effect. So we can say that the effect of that one level was estimated with enough precision to be reasonably confident of its sign, whereas the precision of the analysis for the effects combined is too low to pin down the direction of the (vector-valued) joint effect of all the levels. Which of these explanations (or combination of them) is most appropriate will depend on your audience.
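
                               For the Bonferroni correction mentioned above, Stata's -test- can report adjusted p-values directly via the -mtest()- option; a sketch, assuming the RE model from earlier in the thread has just been fitted:

                               ```stata
                               * Individual tests of each tz_5cat level with Bonferroni-adjusted
                               * p-values, plus the overall joint test:
                               test 1.tz_5cat 2.tz_5cat 3.tz_5cat 4.tz_5cat, mtest(bonferroni)
                               ```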

                              Or maybe I can generate and add a variable showing the different types of parental absence status change and no-change between the two points, such as 0="0 to 0" (both parents absent, no change btw 2 time points), 1="0 to 1", 2="0 to 2" ...etc.
                              This approach appeals to me, but you may find that some of the possible transitions are too rare to estimate their effects in your data. It may be that you will have to group certain of the possible transitions together. Your 5 category variable offers you 5*5 = 25 possible transitions. On average the sample size for each will be just 1/25th of your total sample size. Most likely they will not all occur equally often and some of them will be very rare. So you may have to do a coarser characterization. For example one might consider categories like "both parents present to one or both absent" instead of the 3 distinct transitions this accounts for.
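
                               One hedged sketch of how such a transition variable might be built (it assumes the data are -xtset- on pid and wave with the two test waves one step apart, and tz_5cat coded 0-4 as described above):

                               ```stata
                               xtset pid wave
                               * Unique code for each of the 25 possible transitions:
                               * 5*(earlier state) + current state
                               gen byte trans = 5*L.tz_5cat + tz_5cat
                               tab trans          // check which transitions are too rare to estimate
                               * Coarser grouping, e.g. "both present (3) -> one or both absent (0/1/2)":
                               gen byte lost_parent = (L.tz_5cat == 3) & inlist(tz_5cat, 0, 1, 2)
                               ```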

                               Including the baseline value as a predictor in an FE model is not possible: it will be collinear with the fixed effects and will drop out of the regression.

                              If you use a random effects model, then you can include the baseline state as a variable in the model. But this may not be the best way to represent the transitions. To really capture that, you would need not just the baseline state but the baseline state and its interactions with the current state--which is really just a different way of including the 25 possible transitions, and it will raise the same problems as noted above.

                              As for the R2, the meanings of the three different R2 from an FE regression are given in the PDF documentation that comes with your Stata installation. In particular:
                               Reported as R-sq within is the R-sq from the mean-deviated regression.
                               Reported as R-sq between is corr(xbar_i * b, ybar_i)^2.
                               Reported as R-sq overall is corr(x_it * b, y_it)^2.
                               As a measure of fit for this purpose, I would say the overall R-sq is the best. It most closely characterizes how well the model matches the actual observed outcomes. You can see, by the way, that what is called the "between R-sq" is not really about between-panel effects, and I really don't know why they call it that.
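
                               To see the definitions in action, the overall R-sq can be reproduced by hand from the linear prediction (a sketch with hypothetical variable names y and x):

                               ```stata
                               xtreg y x, fe
                               predict double xbhat, xb             // x_it * b, without the fixed effects u_i
                               corr y xbhat if e(sample)
                               display "overall R-sq = " r(rho)^2   // matches the overall R-sq from xtreg
                               ```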

