Dear all,
I have run a "Bayesian linear regression"; I have already done this successfully in Stata version 14 (SE):
Commands:
************************************************************************************************
bayesmh (y1 x1 x2), likelihood(mvnormal({S,m})) ///
prior({y1:}, normal(0,100)) ///
prior({S,m}, iwishart(1,100,I(1))) saving(C:\Users\wptx015\Desktop\normal.dta)
estimates store model1
bayesmh (y1 x1 x2 x3), likelihood(mvnormal({S,m})) ///
prior({y1:}, normal(0,100)) ///
prior({S,m}, iwishart(1,100,I(1))) saving(C:\Users\wptx015\Desktop\normal2.dta)
estimates store model2
bayesmh (y1 x1 x2 x3 x4), likelihood(mvnormal({S,m})) ///
prior({y1:}, normal(0,100)) ///
prior({S,m}, iwishart(1,100,I(1))) saving(C:\Users\wptx015\Desktop\normal3.dta)
estimates store model3
*Final comparison, not via the "significance level" but via the Bayes factor:
bayesstats ic model3 model2 model1, basemodel(model1)
************************************************************************************************
End of command sequence.
QUESTION:
Based on this sequence of commands (above), my question now is:
"Change the hypotheses for the ß-parameters to one-sided hypotheses":
QUESTION, more precisely:
How could I "re-define" the underlying null hypotheses (here, for the Bayesian multiple linear regression, H0: ß1 = 0, ß2 = 0, etc.) so that the Bayesian linear regression instead evaluates one-sided hypotheses such as H0: ß1 > c and ß2 > c? Is this possible in Stata 14, and if so, how?
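To make this concrete, here is a minimal sketch of what I am hoping for, assuming (I am not certain this is correct!) that bayestest interval in Stata 14 is the right tool for such interval hypotheses; the threshold c = 0 is only a placeholder:
************************************************************************************************
* Sketch (assumption: bayestest interval applies here; c = 0 is a placeholder threshold):
bayesmh (y1 x1 x2), likelihood(mvnormal({S,m})) ///
prior({y1:}, normal(0,100)) ///
prior({S,m}, iwishart(1,100,I(1)))
* Posterior probability that the coefficient on x1 exceeds c = 0, i.e., P(ß1 > 0 | data):
bayestest interval {y1:x1}, lower(0)
************************************************************************************************
If that reports the posterior probability of {y1:x1} > 0, it would seem to correspond to the one-sided hypothesis above, would it not?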
I assume you could keep the same sequence of commands as suggested above, but when it comes to calculating the BAYES FACTOR (which compares the models themselves rather than relying on p-values), you would have to add other appropriate commands?!
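For instance (again only a sketch, assuming bayestest model can be applied to models stored after bayesmh as above):
************************************************************************************************
* Sketch: posterior model probabilities as a companion to bayesstats ic
* (assumes model1, model2, and model3 were stored as above):
bayestest model model1 model2 model3
************************************************************************************************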
Additional remark: with an "OLS-estimated" multiple linear additive regression this is possible AFTER the regression has been run; you can do it afterwards, e.g.:
OLS approach for comparison: changing H0 to a one-sided hypothesis for each ßi:
************************************************************************************************
regress y1 x1 x2
*First, for ß1 of x1, change H0 to a one-sided hypothesis:
local c1 = 0                                // hypothesized threshold c1; set as needed
test _b[x1] = `c1'
local sign_wgt_pos = sign(_b[x1] - `c1')    // sign of the estimated deviation (local macro)
display "H0: b >= c1  p-value = " 1-ttail(r(df_r), `sign_wgt_pos'*sqrt(r(F)))
scalar pvaluepositive = 1-ttail(r(df_r), `sign_wgt_pos'*sqrt(r(F)))
************************************************************************************************
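For completeness, the opposite one-sided direction uses the same sign trick (this sketch relies on the r() results and the local sign_wgt_pos still being in memory from the block above):
************************************************************************************************
display "H0: b <= c1  p-value = " ttail(r(df_r), `sign_wgt_pos'*sqrt(r(F)))
************************************************************************************************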
"After the regression" because the "estimated parameter" stays the same, we only change the Null Hypothesis to a one-sided Hypothesis for each ßi, so we just get a new p-value for the same estimated parameter. But how does it work "changing H0 to one-sided" with this Bayesian approach?
(The software R has a package for this, but how is it possible in Stata 14?)
I would really appreciate an answer or any helpful comments!
Best, Niklas