
  • Bayesian Linear Regression

    Dear all,
    I have run a "Bayesian linear regression", which I have already done successfully in Stata version 14 (SE):

    Commands:
    Code:
    bayesmh (y1 x1 x2), likelihood(mvnormal({S,m})) ///
        prior({y1:}, normal(0,100)) ///
        prior({S,m}, iwishart(1,100,I(1))) saving(C:\Users\wptx015\Desktop\normal.dta)
    estimates store model1

    bayesmh (y1 x1 x2 x3), likelihood(mvnormal({S,m})) ///
        prior({y1:}, normal(0,100)) ///
        prior({S,m}, iwishart(1,100,I(1))) saving(C:\Users\wptx015\Desktop\normal2.dta)
    estimates store model2

    bayesmh (y1 x1 x2 x3 x4), likelihood(mvnormal({S,m})) ///
        prior({y1:}, normal(0,100)) ///
        prior({S,m}, iwishart(1,100,I(1))) saving(C:\Users\wptx015\Desktop\normal3.dta)
    estimates store model3

    * Final comparison, not via "significance levels" but via the Bayes factor:
    bayesstats ic model3 model2 model1, basemodel(model1)
    End of command sequence.

    QUESTION:
    Based on this sequence of commands (above), my question is:
    "Change the hypotheses for the ß parameters to one-sided hypotheses."

    QUESTION, more precisely:
    How could I "re-define" the underlying null hypotheses (here, in a Bayesian multiple linear regression, H0: ß1 = 0, ß2 = 0, etc.) so that the changed H0 is estimated by the Bayesian linear regression with one-sided hypotheses such as H0: ß1 > c and ß2 > c? Is this possible in Stata 14, and if so, how?
    I assume you could use and leave the same sequence of commands as suggested above,
    but that when it comes to computing the BAYES FACTOR you should add other appropriate commands?!

    Additional remark: With an "OLS-estimated" multiple linear additive regression this is possible AFTER having run the regression;
    you can do it afterwards, e.g.:

    OLS approach for comparison, "changing H0 to a one-sided hypothesis for each ßi":
    Code:
    regress y1 x1 x2

    * First, for ß1 of x1, change H0 to a one-sided hypothesis:
    test _b[x1] = c1

    local sign_wgt_pos = sign(_b[x1] - c1)   // local macro
    display "H0: b >= c1  p-value = " 1-ttail(r(df_r), `sign_wgt_pos'*sqrt(r(F)))

    scalar pvaluepositive = 1-ttail(r(df_r), `sign_wgt_pos'*sqrt(r(F)))
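    For completeness, the mirrored one-sided null H0: b <= c1 uses the same sign trick with the upper tail of the t distribution. A minimal sketch, assuming the same regression and, for illustration only, the fixed threshold 0 in place of c1:

    Code:
    regress y1 x1 x2
    test _b[x1] = 0
    local sgn = sign(_b[x1])
    * one-sided p-value for H0: b <= 0 (Ha: b > 0):
    display "H0: b<=0  p-value = " ttail(r(df_r), `sgn'*sqrt(r(F)))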

    "After the regression" because the estimated parameter stays the same; we only change the null hypothesis to a one-sided hypothesis for each ßi, so we just get a new p-value for the same estimated parameter. But how does "changing H0 to one-sided" work with this Bayesian approach?
    (The software R has a package for this, but how is it possible in Stata 14?)

    I really would appreciate an answer or helpful comments etc.!

    Best, Niklas

  • #2
    I'm not a Bayesian. But, first of all, I don't think they say p-values. A p-value in frequentist stats is the probability that you see a certain beta coefficient or greater given the null hypothesis that the beta was 0. (I hope I am not misstating this.) I'm not even sure that there's a null hypothesis in Bayesian analysis.

    That said, if you're testing the probability that a parameter is greater than some value c, don't you want -bayestest interval-, which tests the hypothesis that the parameter is within some interval? That interval can be closed on one end and unbounded on the other, or closed on both.

    In your example code (please enclose it in code brackets for readability, see my signature), I think Stata would probably interpret "c1" as a variable, and if c1 isn't present, then it will search for the corresponding scalar if defined. I don't know how you are storing whatever values you think are critical. I think you would type something like:

    Code:
    bayesmh (y1 x1 x2 x3 x4), likelihood(mvnormal({S,m})) ///
    prior({y1:}, normal(0,100)) ///
    prior({S,m}, iwishart(1,100,I(1))) saving(C:\Users\wptx015\Desktop\normal3.dta)
    estimates store model3
    bayestest interval {y1:x1}, lower(1)
    for a test of the hypothesis that the beta for x1 is greater than 1 exclusive (note the -inclusive- option; exclusive is the default). Specifically, you get the posterior probability (I hope I'm using the term right) that the beta for x1 is greater than 1 exclusive. This follows the syntax in the Stata 14 post about the then-new Bayesian commands.

    You should type -help bayestest- within your Command window. You can typically access a PDF with a fuller description than the one that the help will immediately pull up in Stata; there should be an obvious link in the help file.
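    That interval can also be closed on both ends. A minimal sketch (the bounds 0 and 2 are arbitrary illustration values):

    Code:
    * posterior probability that the beta for x1 lies between 0 and 2:
    bayestest interval {y1:x1}, lower(0) upper(2)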
    Be aware that it can be very hard to answer a question without sample data. You can use the dataex command for this. Type help dataex at the command line.

    When presenting code or results, please use the code delimiters to format them. Use the # button on the formatting toolbar, between the " (double quote) and <> buttons.



    • #3
      Hello Weiwen Ng,

      many thanks for your answer.
      I have worked through the whole night and have now finally come to the correct conclusion. Yes, that is correct: the "bayestest interval" PDF contains appropriate commands which, together with further information from a seminar I attended, led me to the final correct solution. Many thanks; it goes more or less in line with what you have suggested.

      By the way, in Bayesian regression theory and in practice there are a null hypothesis H0 and an alternative hypothesis. H0 refers to an "interval" in which the parameter should fall; if it does not,
      you have the alternative hypothesis Ha of lying outside the interval, which you can define yourself for (1.) continuous variables.
      For (2.) discrete variables you could even define it as H0: ß = 0 versus ß not equal to 0 for H1.
      (3.) Looking at the model as a whole, change could be explained by different "priors", which lead us to different changes in the "posterior" distributions etc.
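      The cases above map onto Stata 14's Bayesian postestimation commands roughly as follows (a partial sketch; the parameter label and stored model names assume the models from the start of this thread, and the interval bounds are illustration values):

      Code:
      * (1.) interval H0 for a continuous parameter:
      bayestest interval {y1:x1}, lower(-0.5) upper(0.5)

      * (3.) whole-model comparison, e.g. under different priors, via Bayes factors:
      bayesstats ic model2 model1, basemodel(model1)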

      Many thanks for your advice on how to use the correct "code" formatting here. I'm a new member, so I have to get accustomed to using the "code delimiters" etc. Thanks! :-)

      Niklas



      • #4
        Dear Stata users,
        apart from the machine-learning approach "ssc install lassopack", which provides different LASSO estimation methods:
        does a "random forest" or "neural network" ado-file exist in Stata?



        • #5
          There was a discussion in the Stata News last year - see https://www.stata.com/stata-news/news33-4/users-corner/

          Note that there is, in addition, a neural network command -brain-; use -search- to find and download it.

