  • JWDID and staggered adoption in DID designs: clarification

    Dear Statalist users,
    In my research I am studying the impact of a treatment on an outcome, where treatment adoption is staggered. I have two questions:
    1. Does the jwdid command in Stata weight the ATTs to address possible treatment-effect heterogeneity in already-treated cohorts, or does it drop those cohorts and remove them from the control sample?
    2. I would like to test for the classical bias in TWFE estimation of models with staggered adoption. I generated an estimate (estat simple) from the regression performed with jwdid and saved the result. I then re-ran the analysis, ceteris paribus, but with the hettype(twfe) option, and saved this second estimate (again via estat simple). Finally, I performed a hypothesis test whose null hypothesis is that the difference between these two estimates is zero. Is this procedure rigorous? If not, any suggestions are welcome.
    I apologize in advance if my questions seem naive, and thank you for your support.
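    The comparison described in question 2 might be sketched as follows. This is a minimal illustration, assuming a panel identified by id and year, with first_treat holding each unit's adoption cohort; all variable names (y, x1, x2, id, year, first_treat) are illustrative, not from the original post.

    ```stata
    * Heterogeneity-robust estimation, then the aggregate ATT
    jwdid y x1 x2, ivar(id) tvar(year) gvar(first_treat)
    estat simple
    estimates store robust

    * Same specification under the TWFE-style restriction
    jwdid y x1 x2, ivar(id) tvar(year) gvar(first_treat) hettype(twfe)
    estat simple
    estimates store twfe
    ```

    Testing the difference between the two stored aggregates requires accounting for the correlation between the estimators, since both are computed from the same sample.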
    Last edited by Simone Robbiano; 21 Jul 2025, 06:22.

  • #2
    Hi Simone
    1) You may want to read Prof. Wooldridge's paper to fully answer your questions. The bottom line is that, by allowing for full interactions, you avoid the issue of differential treatment timing.
    2) The only other way is to run both regressions by hand and then obtain the aggregates. But what you are doing is the simplest second-best approach.



    • #3
      Thank you very much! Your hints have been very useful!



      • #4
        Simone: The only problem with your procedure is that the standard error is unlikely to be correct because the estimators are correlated. You can do this "by hand" by including all of the interactions, of the form dg*fs, where dg are cohort dummies and fs are the time period dummies. Then you can test whether these are all the same.
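        The "by hand" version described above might look like the sketch below, where cohort and year stand in for the dg and fs dummies; all variable names are illustrative, not from the original post. The testparm command with the equal option tests whether the listed coefficients are all equal, which here is the constant-effect restriction.

        ```stata
        * Full-interaction regression: cohort dummies interacted with period dummies
        regress y i.cohort##i.year x1 x2, vce(cluster id)

        * Test that all cohort-by-period interaction coefficients are equal
        testparm i.cohort#i.year, equal
        ```

        Because both sets of effects come from a single regression, this test automatically accounts for the correlation between the estimators.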

        If you go to the pinned tweet on my Twitter (X) account you can find an example.

        The flexible estimation avoids bad comparisons, and Fernando's masterly jwdid command does the proper weighting so you can obtain the effects by weighted exposure time -- as it appears you know. Once you verify you can reproduce the jwdid results, you can do the test as a test of equality of all coefficients.

        I would think that, in most applications, eyeballing the two sets of estimates would be enough. In my experience -- not widespread, but I've seen more than a handful of applications -- the constant-effect model is not so bad. But, of course, you can't know until you try.
