  • Estimating Event-Study Standard Errors Manually

    Many of the newer difference-in-differences estimators plot dynamic effect sizes, with corresponding confidence intervals for every period except the one just before treatment (usually). Well, I want mine to do the same. Here it is, quickly:
    Code:
    * install fdid if it is not already present
    cap which fdid
    if _rc != 0 {
        net from "https://raw.githubusercontent.com/jgreathouse9/FDIDTutorial/main"
        net install fdid, replace
        net get fdid, replace
    }

    clear *
    u smoking, clear
    cls

    * forward DID estimate of Prop 99's effect on cigarette sales
    qui fdid cigsale, tr(treated) unitnames(state) gr2opts(scheme(sj) name(teplot, replace))

    * rebuild the event-time plot manually from e(series)
    mkf newframe
    cwf newframe
    svmat e(series), names(col)
    line te3 eventtime, xli(0)
    This estimates forward DID for Prop 99's effect on tobacco consumption, using a control group of only Montana, Colorado, Nevada, and Connecticut. We see the plot the command makes, as well as how to reproduce it manually from e(series), where te3 is the pointwise treatment effect of interest and eventtime is the time to event. Well... how would I compute the uncertainty/SE for each point, as eventdd or any of the newer estimators do? That is, I wish to produce confidence intervals for these individual treatment effects.

    My original thought was to use bootstrapping somehow. Presumably I should look through the ado code of the newer estimators to see the details, but I was wondering whether there's a straightforward way to do this with the information fdid already saves. How might I go about this? Once I can do that, I think I can extend a similar process to staggered adoption. Perhaps Diego Ciccia, Damian Clarke, or Jeff Wooldridge may have thoughts on this?
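One untested sketch of the bootstrap idea (my own construction, not what any packaged estimator is documented to do): resample the donor states with replacement, re-run fdid on each bootstrap panel, and take the per-period standard deviation of te3 across replications as the SE. The variable names (state, treated, cigsale), the e(series) columns (te3, eventtime), and the assumption that state is stored as a string are all taken from the example above and may need adjusting.

```stata
u smoking, clear

* set the treated unit aside so it appears in every bootstrap panel
preserve
keep if treated == 1
gen bname = state                        // treated unit keeps its own name
tempfile treatedonly
save `treatedonly'
restore

tempname post
tempfile results
postfile `post' rep eventtime te using `results'

forvalues b = 1/199 {
    preserve
    keep if treated == 0
    bsample, cluster(state) idcluster(bid)   // resample donor states whole
    gen bname = state + "_" + string(bid)    // unique name per resampled copy
    append using `treatedonly'
    cap qui fdid cigsale, tr(treated) unitnames(bname)
    if !_rc {
        * harvest this replication's event-time effects from e(series)
        mkf bframe
        cwf bframe
        svmat e(series), names(col)
        qui count
        forvalues i = 1/`r(N)' {
            post `post' (`b') (eventtime[`i']) (te3[`i'])
        }
        cwf default
        frame drop bframe
    }
    restore
}
postclose `post'

* the per-period SD across replications estimates each event-time SE
use `results', clear
collapse (sd) se = te, by(eventtime)
```

Whether donor-level resampling is a sensible scheme here (with one treated unit and few donors) is exactly the open question, so treat the intervals this produces with caution.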

  • #2
    Jared: I've been teaching a method for the past few years that is based on a sequence of cross-sectional regressions. It's based on this working paper: Lee and Wooldridge (2023). The idea is simply to subtract the pre-intervention mean from the outcome in each time period, which collapses the problem to a cross section for each treated period. Then just regress the transformed Y on a constant and the treatment dummy and use exact (small-sample) inference. I've tried this with the California smoking data, with all units and with subsets, but it only appears to be reliable when state-specific trends are also removed; that's described in the paper, too. I can send you my slides on this if you send me an email.

    I'm including this method in a current monograph on DiD, so hopefully there will soon be a clear reference for the small-N case. But it is just the simple method in Lee and Wooldridge (2023).

    I know there's a way to bootstrap with N1 = 1, but the regression is easier. It does rely on normality, though.
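For concreteness, here is a minimal sketch of the transform-and-regress procedure described above, using only built-in commands. This is one reading of the description, not code from the paper or slides; the variable names (state, year, cigsale, treated) and the 1989 intervention date follow the Prop 99 example.

```stata
u smoking, clear

* deviation of the outcome from each unit's pre-intervention mean
bysort state: egen premean = mean(cond(year < 1989, cigsale, .))
gen ydot = cigsale - premean

* one cross-sectional regression per treated period: the coefficient
* on treated is that period's effect, with an exact t-based standard
* error under normality
forvalues y = 1989/2000 {
    qui reg ydot treated if year == `y'
    di as txt "`y': " as res %8.3f _b[treated] ///
        as txt " (se = " as res %6.3f _se[treated] as txt ")"
}
```

Note this sketch omits the state-specific detrending that the post says is needed for the method to be reliable on this data.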



    • #3
      Okay, I see. What I may do then, code-wise, is reshape the dataset long and use xtreg (or whatever method one would use) under the hood. I'll email you this week for the slides, if that's alright. Thank you so much!
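A pooled version of the same transform might also work (a sketch, with the same assumed variable names as in the Prop 99 example): one regression with treated-by-year interactions recovers all the period effects at once, with cluster-robust confidence intervals by state, rather than looping over per-period cross sections.

```stata
u smoking, clear

* deviation from each unit's pre-1989 mean, as in the transform above
bysort state: egen premean = mean(cond(year < 1989, cigsale, .))
gen ydot = cigsale - premean

* i.year absorbs common time effects; 1.treated#ibn.year gives each
* post-period's treatment effect directly (ibn. so no year is dropped
* from the interaction)
reg ydot i.year 1.treated#ibn.year if year >= 1989, vce(cluster state)
```

With very few control states the cluster count is small, so the exact per-period regressions described above may still be the safer inference route.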

      • #4
        Jeff Wooldridge: I know your inbox must be quite busy, so I'm simply posting here that I emailed you. Thanks so much! I'd be super interested in seeing your slides for this method (presumably there is Stata code?).
