  • empirical Bayes estimator or shrinkage

    Hi,
    I wonder whether it is already built into Stata how group-level averages (or leave-out means) are often adjusted for the extra noise in smaller groups. Is such a correction easily available from -mixed-, or from even simpler "multilevel model" tools?

    E.g. one can have data on judge-level leniency (the share of a judge's cases with a positive binary outcome) and want to shrink it towards the population mean. How do I get the adjusted (leave-out) means?

    I am grateful for any pointers, thanks in advance!

  • #2
    What do you mean by judge-level leniency? Are you referring to a case where multiple judges are scoring the same observation, or something different? If it is the shrunken estimates of the random effects you are after, you can use -predict- after mixed-effects models to get them.
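
    For instance, a minimal sketch (untested; "approved" and "judge" are placeholder names for a 0/1 outcome and a judge identifier):

    Code:
    * random-intercept (linear probability) model of the approval decision
    mixed approved || judge: , reml
    * BLUPs of the judge intercepts, i.e. the shrunken deviations from the overall mean
    predict u_judge, reffects
    * shrunken judge-level leniency = fixed intercept + shrunken deviation
    gen double leniency_eb = _b[_cons] + u_judge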

    • #3
      Thanks, maybe -predict- is all that it takes. So a two-level model of the means would work, though for leave-out (jackknife) means it may be tricky with -mixed-.

      Teacher value-added estimates are adjusted similarly (more strongly for smaller classes). In my example, though, the procedure would be used to estimate the effect of approval by instrumenting for approval with the leave-out mean approval rate of the judge who happened to decide the specific case. Judges with higher approval rates are more lenient, but there is more noise in measured leniency for judges with fewer cases. And you want to jackknife because you don't want to regress y on y-bar (the coefficient had better be one in plim).
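
      With made-up variable names (approved = the 0/1 decision, judge = judge id, y = the outcome of interest), the leave-out instrument could be built by hand along these lines (just a sketch, not tested):

      Code:
      * number of cases and total approvals per judge
      bysort judge: egen double n_j = count(approved)
      bysort judge: egen double sum_j = total(approved)
      * leave-out mean: the judge's approval rate excluding the own case
      gen double z_loo = (sum_j - approved) / (n_j - 1) if n_j > 1
      * instrument the approval decision with the leave-out leniency
      ivregress 2sls y (approved = z_loo), vce(cluster judge)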

      • #4
        Ah. With something like an observational tool in educator evaluation, there could be cross-classification of teachers with judges/raters. In that case I would have suggested considering a many-facet Rasch model, but that doesn't seem applicable to what you are doing.

        • #5
          There is some nice discussion of shrinkage on Andrew Gelman's blog, with Jesse Rothstein chiming in. They discuss FEs, REs, correlated FEs, and a two-step procedure that sits somewhere in between; still, I don't think Raj Chetty's value-added estimator on SSC (vam.ado) is a good reference for leave-out mean estimation, nor does it use many built-in functions.
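
          If you want the two-step version by hand, the usual shrinkage formula can be applied to the raw judge means, with variance components taken from -mixed-. A rough sketch (untested; it assumes the coefficient labels -mixed- reports for a single random intercept, and again hypothetical variables approved and judge):

          Code:
          mixed approved || judge: , reml
          * variance components (-mixed- stores log standard deviations in e(b))
          scalar sig2_u = exp(2*_b[lns1_1_1:_cons])
          scalar sig2_e = exp(2*_b[lnsig_e:_cons])
          * judge size, raw judge means, and the grand mean
          bysort judge: egen double n_j = count(approved)
          bysort judge: egen double ybar_j = mean(approved)
          summarize approved, meanonly
          scalar ybar = r(mean)
          * shrinkage weight grows with the number of cases per judge
          gen double lambda_j = sig2_u / (sig2_u + sig2_e/n_j)
          gen double leniency_eb = ybar + lambda_j*(ybar_j - ybar)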

          • #6
            László Sándor while not the same thing, if you're interested in VAM of educator effectiveness you may want to check out some of the material put out by the Center for Education Policy Research's Strategic Data Project: https://cepr.quickbase.com/db/bhcp3hvgq?a=gennewrecord. They require registration to download the example data/scripts, but if you access the human capital toolkit, they include their Stata code for estimating VAM, and it has been used more widely than many of the models debated in the academic literature. http://sdp.cepr.harvard.edu/toolkit-effective-data-use has a bit more information about the project more generally.
