  • staggered DID estimation: problem with confidence intervals

    I am currently estimating dynamic treatment effects using the csdid command in Stata, specifically applying the dripw method along with covariates treated non-parametrically:

    csdid renewable_share, ivar(country_id) time(year) gvar(treat_year) covariates(log_GDP_pc log_energy_pc HDD) notyet method(dripw) wboot

    I’ve noticed that, in some cases, the confidence intervals obtained via wboot contain 0 even when the p-values indicate strong statistical significance. When I use vce(robust) instead of wboot, however, the CIs and p-values are consistent with each other.

    What would you recommend?

    Thank you!

  • #2
    The confidence intervals (CIs) and p-values generated by wild bootstrap are based on the empirical distribution of the test statistic under the null hypothesis, not on the standard t- or normal distribution. With small samples or skewed data, the assumptions of symmetry that often make CIs and p-values perfectly align in parametric statistics do not necessarily hold. Therefore, seeing a CI include 0 while the p-value is less than 0.05 is possible (and is often observed) and reflects the characteristics of the bootstrap distribution itself.
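
    As a minimal sketch (assuming csdid accepts rseed() to control the wild-bootstrap draws, which is worth verifying in help csdid), fixing the seed makes the bootstrap CIs and p-values reproducible, so you can check that the CI/p-value mismatch is a feature of the bootstrap distribution and not just simulation noise from one particular set of draws:

    * Fix the bootstrap seed so the CIs/p-values are reproducible across runs
    csdid renewable_share, ivar(country_id) time(year) gvar(treat_year) ///
        covariates(log_GDP_pc log_energy_pc HDD) notyet method(dripw) ///
        wboot rseed(12345)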

    • #3
      OK, that makes sense, thank you! What if I use vce(robust) only for the plots, explain the procedure clearly in my thesis, and keep wboot for the ATT estimation? Otherwise, the graphs of my DID appear to show no post-treatment effects, even though the coefficients are statistically significant. Would this be an acceptable procedure?

      Thank you!!

      • #4
        There must be a valid reason for using wild bootstrap in the first place—typically, a small number of clusters or concerns about inference reliability. If that's the case, using a different method like -vce(robust)- just for visualization creates a mismatch between your inference and your presentation. Personally, I would consider omitting the graphs altogether. While visualizations can be helpful, they’re not essential in this context. Reviewers who see that you're using wild bootstrap are unlikely to expect or demand parametric-style plots.
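
        If you do keep the graphs, one option (a sketch, assuming the csdid helper commands estat event and csdid_plot behave as described in the package documentation) is to plot the same wild-bootstrap intervals you use for inference, so the figures and the tables tell one story:

        * Aggregate to an event study and plot the stored (wboot) intervals
        estat event, window(-5 5)
        csdid_plot, title("Event-study ATTs")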

        • #5
          I do have a doubt: with the wild bootstrap, do I interpret statistical significance only through the CIs? And between the t-statistic and the CIs, which should I rely on? The staggered DID output only reports the t-statistic and the CIs.

          • #6
            Originally posted by Claudia Armenise:
            with the wild bootstrap, do I interpret statistical significance only through the CIs? And between the t-statistic and the CIs, which should I rely on? The staggered DID output only reports the t-statistic and the CIs.
            The bootstrap t-statistic by itself is not interpretable without reference to the empirical distribution generated by the bootstrap (you cannot use standard normal or t-distributions). Therefore, statistical significance should be determined using the bootstrap confidence intervals, or alternatively, by computing bootstrap-based p-values based on where the observed statistic lies in the empirical distribution.
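
            In practice (a sketch, assuming the standard csdid post-estimation commands), that means reading significance directly off the reported bootstrap CIs after aggregation, rather than comparing the t-statistic to 1.96:

            * Overall ATT and dynamic ATTs; a 95% CI excluding 0 indicates
            * significance at the 5% level under the bootstrap distribution
            estat simple
            estat event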

            • #7
              csdid renewable_share, ivar(country_id) time(year) gvar(treat_year) covariates(log_GDP_pc log_energy_pc HDD) notyet method(dripw) wboot(reps(1999) wtype(rademacher)) pointwise

              Okay, with this command the confidence intervals are narrower and statistically significant (they exclude zero) after the treatment. Is it correct to use pointwise and wtype(rademacher) in this case? Should I propose this strategy to my professor, or am I only working around the problem without solving it?

              Thank you again!
              Last edited by Claudia Armenise; 04 Aug 2025, 14:35.

              • #8
                FernandoRios is better suited to answer that question, as I am not deeply familiar with the command's options.

                • #9
                  Thank you, I'll wait for his opinion, then!

                  Thank you again!

                  • #10
                    As always, it depends.
                    Pointwise standard errors may not be valid jointly;
                    in other words, there is a higher chance of false positives.

                    • #11
                      As a check on plausibility, you can use jwdid with the never option. This produces standard clustered standard errors after flexible regression.
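
                      A hedged sketch of that check (option names assumed to mirror the csdid call; in jwdid the covariates enter as regressors, and never keeps never-treated units as the control group):

                      jwdid renewable_share log_GDP_pc log_energy_pc HDD, ///
                          ivar(country_id) tvar(year) gvar(treat_year) never
                      estat event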
