I am trying to use Monte Carlo simulations to conduct a sensitivity analysis of a regression to potential measurement error in the independent variable. What I would like to do is add different amounts (x) of noise to the variable and then, for each amount x, run the MC simulation. In the end, I would have a sense of how much the results change when x amount of noise is added. For x I was thinking of a fraction of the actual value of the independent variable.

To provide an implementable working example, imagine I want to use the auto data in Stata and `reg price mpg`. I then want to see how sensitive the results are to mismeasurement of `mpg` by adding noise to `mpg` based on the distribution of `mpg`. That is, run an MC simulation with `replace mpg = mpg + rnormal(0.1*mean(mpg),0.1*sd(mpg))` at .1 intervals all the way up to 1 (apologies for the abuse of Stata syntax). Then I could see how much noise, as a fraction of the reported `mpg`, would need to be added before the results change.

I could obviously write this out as a set of 10 (.1 to 1) different programs, but I would like to use a loop to cycle through the values from .1 to 1. I am not sure whether the best way is to write a loop that changes the value of `x = .1(.1)1` and calls the MC program within the loop, or whether it is better to put the loop inside the program.
As I said, I'm not sure putting the loop inside the program is the right approach. I've tried to follow the advice in this post but to no avail. Right now I have the following:
Code:
sysuse auto.dta, clear
reg price mpg

cap program drop myreg
program myreg, rclass
    * integer loop over 1/10 avoids the floating-point problems
    * forvalues has with a .1(.1)1 step
    forvalues i = 1/10 {
        local x = `i'/10
        qui sum mpg
        local mpgmn = r(mean)*`x'
        local mpgsd = r(sd)*`x'
        return scalar mpgmn = `mpgmn'
        return scalar mpgsd = `mpgsd'
        * note: mpg is never reset, so the noise accumulates
        * across the 10 levels and across replications
        replace mpg = mpg + rnormal(`mpgmn',`mpgsd')
        reg price mpg
    }
end

* run MC simulations
simulate _b _se, reps(100) seed(5762): myreg
gen t_mpg = _b_mpg/_se_mpg
gen p_mpg = 2*ttail(72, abs(t_mpg))    // df_r = 74 obs - 2 parameters
sum _b_mpg _se_mpg p_mpg
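For what it's worth, here is a rough sketch of the alternative structure I mentioned, with the loop outside the program and the noise fraction passed in as an argument (`myreg2` is just a placeholder name; the data are reloaded on each call so the noise does not accumulate across replications):

cap program drop myreg2
program myreg2, rclass
    args x                          // noise fraction: .1, .2, ..., 1
    sysuse auto.dta, clear          // reload so noise does not accumulate
    qui sum mpg
    local mpgmn = r(mean)*`x'
    local mpgsd = r(sd)*`x'
    replace mpg = mpg + rnormal(`mpgmn',`mpgsd')
    reg price mpg
end

forvalues i = 1/10 {
    local x = `i'/10
    simulate _b _se, reps(100) seed(5762): myreg2 `x'
    gen p_mpg = 2*ttail(72, abs(_b_mpg/_se_mpg))    // df_r = 74 obs - 2 parameters
    di as text "noise fraction = `x'"
    sum _b_mpg _se_mpg p_mpg
}

Is that second structure the better way to set this up, or is there a cleaner way to cycle through the noise fractions?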