placebo test for DID

    Hello,

    I'm running a DID and want to conduct 3 placebo tests: 1. randomly shuffle treat (e.g., 500 times) and re-run the regression, 2. randomly shuffle post, and 3. randomly shuffle both treat and post. The dependent variable for my DID is a dummy, so I use a probit model.
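    To make the design concrete, this is roughly what I mean by test 1, written out by hand (just a sketch: treat_perm, treat_post_perm, and manual_placebo.dta are placeholder names, and I'm assuming treat_post is simply treat*post; the other variables match my model below).
    Code:
    * by-hand version of placebo test 1: shuffle treat, rebuild the interaction, re-run the probit
    set seed 123
    tempname results
    postfile `results' double beta using "manual_placebo.dta", replace
    forvalues i = 1/500 {
        preserve
        * randomly reassign the treat values across observations
        gen long origorder = _n
        gen double u = runiform()
        sort u
        gen byte treat_perm = treat[origorder]
        gen byte treat_post_perm = treat_perm*post
        * store the placebo coefficient only if the probit runs without error
        capture probit y treat_perm post treat_post_perm c1 c2 c3 c4 c5 i.industry i.year, vce(cluster firm)
        if _rc == 0 post `results' (_b[treat_perm])
        restore
    }
    postclose `results'
    Instead of looping by hand like this, I'm using permute.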

    My code
    Code:
    permute treat beta = _b[treat], reps(500) seed(123) saving("simulation.dta", replace): probit y treat post treat_post c1 c2 c3 c4 c5 i.industry i.year, vce(cluster firm)
    
    
    use "simulation.dta", clear
    
    #delimit ;
    histogram beta, xline(0.034, lc(black*0.5) lp(dash))
                 xtitle("Coefficient estimates", size(*0.8))
                 xlabel(-0.10(0.02)0.10, format(%4.2f) labsize(small))
                 ytitle("Frequency", size(*0.8))
                 ylabel(, nogrid format(%4.0f) labsize(small))
                 note("") caption("")
                 graphregion(fcolor(white)) ;
    #delimit cr
    
    graph export "placebo.png", width(1000) replace
    I had no problem running this code and drawing the histogram when I use reg, but when I switch to probit, all of the beta values come out missing:
    Code:
    -------------------------------------------------------------------------------
                 |                                               Monte Carlo error
                 |                                              -------------------
               T |    T(obs)       Test       c       n      p  SE(p)   [95% CI(p)]
    -------------+-----------------------------------------------------------------
            beta |  .1740513      lower       0       0      .      .      .      .
                 |                upper       0       0      .      .      .      .
                 |            two-sided                      .      .      .      .
    -------------------------------------------------------------------------------
    Notes: For lower one-sided test, c = #{T <= T(obs)} and p = p_lower = c/n.
           For upper one-sided test, c = #{T >= T(obs)} and p = p_upper = c/n.
           For two-sided test, p = 2*min(p_lower, p_upper); SE and CI approximate.
           Some permutations led to results with missing values.
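    For what it's worth, this is how I looked at the saved results (just a quick check of simulation.dta; beta is the statistic name from the permute call above).
    Code:
    * inspect the permuted coefficients saved by permute
    use "simulation.dta", clear
    count if missing(beta)    // how many permutations produced a missing coefficient
    summarize beta
    This matches the n = 0 in the permute output above.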
    I wonder what the problem with this code is, and how I should adjust it if I have to use a probit model. Thanks!