  • Generating Residual vs Predictor Plots for Panel Models and Testing AIC Across Different Models?

    Hi Stata folks,

    I'm running some panel models using the following command:
    xtreg y x1 x2 x3, re mle vce(bootstrap)
    and then also using the xsmle package (Belotti, Hughes, and Piano Mortari, 2013): xsmle y x1 x2 x3, wmat(spm_name) model(sar) re vce(dkraay)
    1. Since I can't get the standard post-estimation plots (rvpplot, avplots) to run after these kinds of regressions, what's the best approach to obtaining accurate diagnostics? I'm trying to assess linearity, examine residuals, and so forth.
    2. Is it appropriate to compare AIC and BIC across two models with the same regressors, but different estimation approaches? For instance, in this case, the RE model returns much lower AIC scores than the xsmle model.
    Thanks!

    -nick


  • #2
    Residuals versus predictor plots should be easy whenever there is a concept of a predicted value.

    1. If residuals are defined and available after modelling, go to step 3.

    2. If predictions are defined, then calculate residuals as observed - predicted, or in whatever other form the literature implies.

    3. Plot residuals against predictors. crossplot (SSC) should help here; see the sketch below for a plain scatter version.
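
    For instance, a minimal sketch after the xtreg fit from the original post (graph names are placeholders):

    xtreg y x1 x2 x3, re
    predict double xbhat, xb                  // linear prediction
    generate double res = y - xbhat           // residual = observed - predicted
    foreach v of varlist x1 x2 x3 {
        scatter res `v', name(rv_`v', replace)    // one plot per predictor
    }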

    AIC is not my stuff.



    • #3
      Thanks Nick -- appreciated. I'm still trying to figure out the AIC issue. I know it's based on the log likelihood, but I'm not sure whether that is calculated differently across different models.



      • #4
        Nick (Cain),

        So long as you do not directly transform the dependent variable, the log likelihoods will be comparable (see Burnham & Anderson, 2002).

        Thus, transforming the dependent variable from DV to ln(DV) changes the data, making the model comparison suspect. However, comparing glm DV, family(gaussian) link(identity) to glm DV, family(gaussian) link(log) changes only the model, not the data, and allows for a conceptually reasonable comparison.
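
        For example (x1 and x2 are placeholder covariates; estat ic reports AIC and BIC after each fit):

        glm DV x1 x2, family(gaussian) link(identity)
        estat ic
        glm DV x1 x2, family(gaussian) link(log)
        estat ic                          // same data, so these two AICs are comparable
        generate double lnDV = ln(DV)     // transforming the DV changes the data
        regress lnDV x1 x2
        estat ic                          // this AIC is not comparable to the two above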

        The AIC is not computed differently across models (it is always -2*lnL + 2*k, where lnL is the fitted log likelihood and k the number of estimated parameters) - but the object it is comparing across models can change, rendering the comparison less useful.

        - joe

        Reference
        Burnham, K. P., & Anderson, D. R. (2002). Model selection and multimodel inference: A practical information-theoretic approach. Springer.
        Joseph Nicholas Luchman, Ph.D., PStat® (American Statistical Association)
        ----
        Research Fellow
        Fors Marsh

        ----
        Version 18.0 MP



        • #5
          Joe -- thanks for your comments and the reference. Just to clarify: ceteris paribus, changing how the standard errors are estimated (e.g., using Driscoll-Kraay or bootstrapped standard errors) will not change the AIC?
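
          For instance, I would expect these two fits to report identical AIC values, since vce() should affect only the standard errors (reps() kept small for speed):

          xtreg y x1 x2 x3, mle
          estat ic
          xtreg y x1 x2 x3, mle vce(bootstrap, reps(50))
          estat ic                        // same log likelihood, hence same AIC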
