
  • Random implementation test/falsification check for Diff-in-Diff with staggered adoption

    Dear all,

    I use difference-in-differences estimation to analyze the effect of a legislative policy (with staggered adoption) on self-employment in US counties. The analysis covers the years 2000 through 2016. Some counties remain untreated for the entire period.

    Basic code: xtreg self_employment treatment i.year, fe cluster(county_id)

    First results indicate a significant negative effect on self-employment. However, I want to ensure that the findings are not idiosyncratic (e.g., due to serial correlation or spurious relationships). As a robustness check, I would like to run a random implementation test/falsification check in which the treatment indicator (1 = treatment; 0 = no treatment) is randomly assigned within the county-year panel and the effect of this randomly assigned indicator is estimated with the difference-in-differences model. This procedure should be replicated 1,000 times.

    Unfortunately, as I am new to Stata and Diff-in-Diff, I wasn't able to implement such a random implementation test/falsification check with the specifications described above. It would be a great help if you could provide me with a solution/code for such a test!

    The data is structured like this (exemplary values):

    COUNTY-ID YEAR SELF-EMPLOYMENT TREATMENT
    01001 2000 7500 0
    01001 2001 7500 0
    01001 2002 7500 0
    01001 2003 7700 0
    01001 2004 7700 0
    01001 2005 7700 0
    01001 2006 7200 0
    01001 2007 7100 0
    01001 2008 7100 0
    01001 2009 7200 0
    01001 2010 7300 0
    01001 2011 7700 1
    01001 2012 7800 1
    01001 2013 8200 1
    01001 2014 8200 1
    01001 2015 8400 1
    01001 2016 8200 1
    01003 2000 2400 0
    01003 2001 2300 0
    01003 2002 2200 0
    01003 2003 2300 0
    01003 2004 2400 0
    01003 2005 2300 0
    01003 2006 2400 0
    01003 2007 2400 0
    01003 2008 2500 1
    01003 2009 2500 1
    01003 2010 2400 1
    01003 2011 2600 1
    01003 2012 2600 1
    01003 2013 2600 1
    01003 2014 2700 1
    01003 2015 2800 1
    01003 2016 2800 1

    Thank you!

  • #2
    In the future, please use the
    Code:
    ssc install dataex
    dataex
    commands to share example data, since they let us paste the example straight into our do-files.

    I have two points to make. First, if you'd like to do randomization placebos, is ritest what you're looking for?

    Also, I don't advise using the two-way fixed-effects approach for staggered implementation, for the reasons cited here. I know, I know, econometricians swore by it before this... but there are better ways of doing staggered event DD studies; I used them in my master's thesis this past May. These estimators have implementations in Stata.
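    On the placebo idea, here is a rough, untested sketch of the kind of loop you describe: randomly reassign the treatment indicator across county-years, re-estimate, and store each placebo coefficient. The treated share (0.3) and the seed are placeholders, not values from your data.

    Code:
    * Untested sketch: random-implementation placebo loop
    set seed 12345
    xtreg self_employment treatment i.year, fe cluster(county_id)
    scalar b_actual = _b[treatment]

    tempfile results
    tempname ph
    postfile `ph' double b using `results', replace
    forvalues r = 1/1000 {
        quietly {
            gen byte placebo = runiform() < 0.3   // placeholder treated share
            xtreg self_employment placebo i.year, fe cluster(county_id)
            post `ph' (_b[placebo])
            drop placebo
        }
    }
    postclose `ph'

    use `results', clear
    gen byte exceed = abs(b) >= abs(scalar(b_actual))
    summarize exceed

    The mean of exceed is the share of placebo effects at least as large in magnitude as your actual estimate, i.e. a permutation-style p-value.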

    Comment


    • #3
      Dear Jared,

      thanks for your quick response. Yes, I'll use dataex in the future, thanks for the hint!

      Yes, I actually tried to do randomization placebos with ritest, but unfortunately wasn't able to implement it correctly as I'm unsure how the code would look like.

      Would it be something like this? If so, how do I implement the 1,000 repetitions?

      Code:
      ritest treatment _b[c.treatment#c.post_treatment], cluster(county_id): reg self_employment c.treatment##c.post_treatment

      Thank you!

      Comment


      • #4
        Max Schmidt I've never used this command before. Put the code into Stata and show me the exact error messages that are returned.

        Did you check the help file? I know the Stata help files might as well be written in Russian sometimes, but those will tell you what you need. So, please show the exact code you ran using code delimiters, as well as the errors returned.

        Typically, the help files do give SOME sort of example of code. Indeed, there's even a Stata Journal article for this command.

        Comment


        • #5
          Thanks Jared. Yes, I checked the help files, the Stata Journal article, and the developer's GitHub page. But I'm concerned that, given my inexperience with Stata and Diff-in-Diff, I'm still missing something (probably a lot)...

          Here's the code I used (after creating the post_treatment variable):
          Code:
          ritest treatment _b[c.treatment#c.post_treatment], reps(1000) cluster(county_id): reg self_employment c.treatment##c.post_treatment
          Stata returns the following error code:

          treatment does not seem to be constant within clusters
          r(9999);

          Moreover, I receive the following note:

          note: c.treatment#c.post_treatment omitted because of collinearity.

          Thank you!

          Comment


          • #6
            Max Schmidt the correct syntax, based on the article, seems to be
            Code:
            ritest treatment _b[treatment]
            You have the interaction term specified in the command, when it looks like you only need the indicator for the treatment. Also, it looks like you are specifying your indicator variables incorrectly in the regression analysis. You have them specified as continuous variables,
            Code:
            c.treatment#c.post_treatment
            when these are in fact indicator/dummy variables. The way to do this, assuming that your post-policy variable is just 1 after and 0 before the intervention, would be
            Code:
            i.treatment##i.post_treatment
            in the regression equation.
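            Putting those pieces together, the full call would look something like this (untested sketch; note that with factor-variable notation the main coefficient is named 1.treatment rather than treatment, and seed() just makes the 1,000 draws reproducible):
            Code:
            ritest treatment _b[1.treatment], reps(1000) seed(123): reg self_employment i.treatment##i.post_treatment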

            Comment


            • #7
              Max Schmidt one book, which happens to be open source and which I would strongly recommend as a reference and intro to DD, is this book by Cunningham. He goes into a lot of detail about placebo tests, and if I recall correctly he gives code to implement such tests. If you're just getting into DD, likely for an econometrics course, it is a good text to consult.

              Comment


              • #8
                Thanks for all of your helpful advice, Jared. Actually, I came across this book just last week; I'll definitely have a deeper look into it! I'll try the fixed code tomorrow and let you know whether it works. Thank you again for your time and helpful comments!

                Comment
