  • How to read the suest command results

    Hi everyone,
    I have a question about interpreting the results of the suest command. After running suest, we use the test command to test the difference between the two models. The result comes out, for example, as follows:
    Code:
       chi2(  1) =    4.17          Prob > chi2 =    0.0004
    How should that be interpreted? Does it mean there is a significant difference between the two models' estimates, or the opposite? I would appreciate an explanation.

  • #2
    You didn't get a quick answer. Having posted so many times, you should know that following the FAQ on asking questions will help you get a useful answer: provide Stata code in code delimiters, readable Stata output, and sample data using dataex.

    It is very easy for you to figure out whether this indicates a significant difference or not. Simply create two artificial regressions where the parameters differ, and two where the parameters are almost identical, then run the tests.
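
    The experiment suggested above might look like the following minimal sketch (the variable names, coefficients, and seed are all illustrative, not from the original question):

    Code:
       * Illustrative simulation: one pair of clearly different slopes.
       * Rerun with nearly identical slopes (e.g. 0.50 and 0.51) to compare.
       clear
       set seed 12345
       set obs 500
       generate x  = rnormal()
       generate y1 = 1 + 0.5*x + rnormal()   // slope 0.5
       generate y2 = 1 + 2.0*x + rnormal()   // slope 2.0, clearly different
       quietly regress y1 x
       estimates store m1
       quietly regress y2 x
       estimates store m2
       suest m1 m2
       test [m1_mean]x = [m2_mean]x

    Comparing the p-values from the two runs shows directly which pattern corresponds to a "significant difference."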



    • #3
      Thank you.
      I actually asked this question with some details earlier, but that last question (this one) got no response, so I asked again.
      Thanks.



      • #4
        Hi Alkebsee,

        I think you posted only the test statistic and p-value. The -test- command tests exactly what you specified; for example, if you are testing [model1_mean]x1 = [model2_mean]x1, the command evaluates a null hypothesis of the form [model1_mean]x1 - [model2_mean]x1 = 0.

        So the null hypothesis is that the difference between the two coefficients is zero. In your first post you got Prob > chi2 = 0.0004, so you reject the null hypothesis that the coefficients are the same: the difference between them is statistically different from zero.
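
        Concretely, with the hypothetical model and variable names used above, the sequence would be:

        Code:
           suest model1 model2
           * Null hypothesis: [model1_mean]x1 - [model2_mean]x1 = 0
           test [model1_mean]x1 = [model2_mean]x1
           * A small Prob > chi2 means rejecting equality of the two coefficients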

        In a previous topic (https://www.statalist.org/forums/for...o-coefficients) I answered this question for you, but my interpretation at the end was wrong; the correct interpretation is the exact opposite of what I said there. Sorry about that.

        Regards,



        • #5
          Originally posted by Fernando Martins
          Thank you so much, that is very clear now.
          Is there another test that gives a t statistic for comparing two models? The suest command gives a Wald statistic, not a t statistic.
          One more thing: as you know from the previous post, I am comparing two models, but the independent variable differs between them (i.e., CEOpay in model 1 and CFOpay in model 2). Is that also okay?
          @Fernando Martins



          • #6
            Hi Alkebsee,

            For this purpose the t-test and the Wald test will lead to the same answer; they are asymptotically equivalent. The t-test would only matter if you have a small sample. As for the command itself, I'm not sure, since I've never had any problem reporting Wald statistics for this purpose.

            Regarding the second question, whether your comparisons make sense is something you need to answer yourself. There is no problem comparing coefficients of the same variable across different model specifications, or even comparing coefficients of different variables; the point is that the comparison should make sense for what you want to explain. If I'm not mistaken, you shared a model in another topic comparing the pay of CEOs and CFOs. There you can find both kinds of comparison: they test whether the coefficient of the same variable is equal across the two models (CEO and CFO), but they also test whether the SMISS and BMISS coefficients are equal within the same model (check the table footnote).
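
            A comparison across different regressors would be set up the same way; this hypothetical sketch assumes made-up dependent variables and controls (only CEOpay and CFOpay come from the question):

            Code:
               * Hypothetical sketch: comparing coefficients of different
               * variables across two models (perf1, perf2, controls are
               * illustrative names, not from the original models).
               quietly regress perf1 CEOpay controls
               estimates store ceo
               quietly regress perf2 CFOpay controls
               estimates store cfo
               suest ceo cfo
               test [ceo_mean]CEOpay = [cfo_mean]CFOpay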



            • #7
              Originally posted by Fernando Martins
              First of all, thank you so much.
              Secondly, given that you have never had issues reporting Wald statistics, I should not have any either. That is one issue solved; thank you so much.

              Once again, thanks, I got it.
              Thanks for the good explanation.
