Prompted by commentary from James MacKinnon, I completely rethought and reworked how boottest performs the wild bootstrap after IV/GMM--more precisely, how it performs the Wild Restricted Efficient (WRE) bootstrap of Davidson & MacKinnon (2010). On my benchmark, the Steven Levitt criminology example at the end of the "Fast and Wild" paper about boottest, it is now 233 times faster! This change is distinct from the ~10X speed-up after OLS.


The main things you need to know if you use boottest, or are boottest-curious:

- One new change can slightly affect results, specifically the bounds for confidence sets--and not just after IV/GMM. I switched to a different search algorithm for pinpointing CI bounds (Chandrupatla 1997). In the bootstrap, the p value as a function of the trial parameter value is a step function when viewed at high resolution: it can only take values n/B where n is an integer and B is the number of bootstrap replications. As a result, when searching for the cross-over point for, say, p = 0.05, values in a small range are equally valid. The new algorithm happens to settle on slightly different points. These discrepancies disappear as you increase the number of replications.
- The new version is available on SSC:

  ```stata
  ssc install boottest, replace
  ```

- The release history back to 2017 is preserved on GitHub. To install an old version, do:

  ```stata
  net install boottest, replace from(https://raw.github.com/droodman/boottest/vX.Y.Z)
  ```
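To see why different root-finding algorithms can settle on slightly different confidence-interval bounds (the first bullet above), here is a toy Python sketch. It is illustrative only, not boottest's internals, and it uses the test statistic as a stand-in for the trial parameter value: a p value computed from B bootstrap replications can only take the values 0/B, 1/B, ..., B/B, so every point within one flat step of the function solves p = 0.05 equally well.

```python
import numpy as np

rng = np.random.default_rng(1)
B = 99                                           # bootstrap replications
tstar = np.sort(np.abs(rng.standard_normal(B)))  # |t| from B bootstrap draws

def p_value(t):
    # bootstrap p value: share of replications at least as extreme as t
    return np.mean(tstar >= abs(t))

# p_value() is a step function taking only the values 0/B, 1/B, ..., B/B
ts = np.linspace(0, 4, 2001)
ps = np.array([p_value(t) for t in ts])
assert set(np.round(ps * B)) <= set(range(B + 1))

# all t sharing the p value nearest 0.05 form a whole interval, not a single
# point, so any root-finder may stop anywhere inside it
target = ps[np.argmin(np.abs(ps - 0.05))]
flat = ts[ps == target]
assert flat.max() > flat.min()
```

As the post says, the steps have width 1/B in the p direction, so the ambiguity, and hence the discrepancy between search algorithms, shrinks as the number of replications grows.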

What I realized is that by partialling out a control set consisting of, say, 100 exogenous dummy variables, the dimensionality of the estimator can be reduced during bootstrap replication: the matrix to be inverted and multiplied on each replication, Z'MZ, can be 1x1 instead of 100x100. The cost is that on each replication, the partialled-out variables must also be partialled out of the bootstrap versions of the endogenous variables. But that is a linear operation, of the sort that occurs when wild-bootstrapping OLS, which I already knew how to speed up.
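A minimal NumPy sketch of the Frisch-Waugh-Lovell logic behind this (illustrative only -- boottest's actual WRE code is different, and the variable names here are invented): partialling 100 dummy controls out of the other variables up front shrinks the cross-product matrix that must be inverted from 101x101 to 1x1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 100

# W: 100 exogenous dummies (group indicators); x: the regressor of interest
groups = rng.integers(0, k, size=n)
W = np.eye(k)[groups]
x = rng.standard_normal(n) + groups / k      # correlated with the dummies
y = 2.0 * x + W @ rng.standard_normal(k) + rng.standard_normal(n)

# Full regression: requires inverting a (k+1) x (k+1) cross-product matrix
beta_full = np.linalg.lstsq(np.column_stack([x, W]), y, rcond=None)[0][0]

# Partial W out of x and y once; after that, each replication needs only a
# 1x1 "inversion" (a scalar division) instead of 101x101
P = W @ np.linalg.pinv(W)                    # projection onto the controls
Mx, My = x - P @ x, y - P @ y
beta_fwl = (Mx @ My) / (Mx @ Mx)

assert np.isclose(beta_full, beta_fwl)       # FWL: identical coefficient
```

The residualization x - P @ x is the linear operation the post refers to: after IV/GMM it must be reapplied to the bootstrapped endogenous variables on each replication, but that is cheap compared with repeatedly inverting the full cross-product matrix.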

The surgery was radical, so please let me know if you run into problems.

--David
