Dear Statalists,
I hope this isn't the wrong venue for this question. It isn't specifically about Stata, so my apologies if asking it here is improper.
I have been reading up on robustness checking/testing, and there truly seems to be no consensus, not even here on Statalist, on whether robustness testing is worthwhile (or sufficient), nor on how it should be done. I am curious what you think of the topic.
Most of the literature seems to suggest adding and removing regressors to see whether the baseline model is sensitive to these changes. But not everyone agrees with this approach.
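To make my question concrete, here is the kind of check I mean, as a minimal sketch using the built-in auto dataset (the variables are just placeholders, not my actual model):

```stata
* Hypothetical baseline model, then perturbed specifications
sysuse auto, clear
regress price mpg weight            // baseline specification
regress price mpg weight length     // add a regressor
regress price mpg                   // remove a regressor
* The idea: inspect whether the coefficient on mpg is stable across specifications
```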
Karlson, Holm and Breen (who wrote the 'khb' Stata command) argue that, especially in logit and other non-linear models, adding or removing variables is not as innocuous as it is in a linear model.
Plümper and Neumayer argue that one shouldn't base a robustness check only on the statistically significant findings. They also argue that adding or removing variables doesn't truly lead to more valid inferences, because "none of the models are correctly specified" and "there is no guarantee that you employed all important plausible alternative model specifications in robustness testing".
Lu and White express concerns about the Stata commands 'checkrob' and 'rcheck', because these commands don't account for model restrictions.
Young and Kroeger say one should test plausible alternative models against each other (for example, probit vs. logit).
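If I understand that suggestion correctly, it amounts to something like the following sketch (again using the auto dataset as a stand-in, with placeholder variables):

```stata
* Hypothetical comparison of logit vs. probit fits of the same model
sysuse auto, clear
logit foreign mpg weight
estimates store m_logit
probit foreign mpg weight
estimates store m_probit
* Side-by-side coefficients and information criteria for the two link functions
estimates table m_logit m_probit, stats(aic bic) b(%9.3f) se
```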
I wonder whether any of you, with your statistical experience, have opinions to share on the subject.