  • Wald test: why coef1+coef2=0 and coef2=0 but coef1 does not equal zero?

    Dear All
    I estimate the following model:
    xtreg crmv3WL bvW abn2W neg_siW op2WA l.bvW i.yr ,fe

    The output (year fixed effects are suppressed) is:
    Code:
    ----------------------------
                        (1)
                    crmv3WL
    ----------------------------
    bvW               0.219***
                    (30.41)

    abn2W             0.693***
                    (44.32)

    neg_siW          -0.0342
                    (-0.60)

    op2WA            -0.410
                    (-0.90)

    L.bvW             0.0496***
                     (6.73)
    ----------------------------

    I am interested in testing whether the coefficient on abn2W plus the coefficient on op2WA equals 0.

    I run:
    test abn2W+op2WA=0

    ( 1) abn2W + op2WA = 0

    F( 1, 38358) = 0.38
    Prob > F = 0.5350

    I conclude that the null cannot be rejected and that in fact coeff abn2W + coeff op2WA = 0.


    I run:
    . test op2WA=0

    ( 1) op2WA = 0

    F( 1, 38358) = 0.80
    Prob > F = 0.3701

    I conclude that the null cannot be rejected and that in fact coeff op2WA = 0.


    I run:
    . test abn2W=0

    ( 1) abn2W = 0

    F( 1, 38358) = 1964.34
    Prob > F = 0.0000
    This means that the null is rejected and coeff abn2W is not equal to 0.

    Now I am so confused: how can it be that abn2W + op2WA = 0 and op2WA = 0, but in the last test abn2W is not equal to zero?

    I don't know if this is the effect of working until this time of night, or whether there is something wrong that I didn't realize.
    I would really appreciate it if anyone could give me some guidance!


    Thanks

  • #2
    The coefficient associated with abn2W is positive and the coefficient associated with op2WA is negative, so abn2W + op2WA is much closer to zero than either abn2W or op2WA on its own.
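
    A quick way to see this in Stata (a sketch; the lincom line assumes the xtreg from the original post is still in memory):

    Code:
    * rough check using the rounded coefficients reported above:
    * .693 + (-.410) = .283, much closer to 0 than either coefficient alone
    di 0.693 + (-0.410)
    * after the xtreg, -lincom- reports the sum along with its SE and 95% CI
    lincom abn2W + op2WA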

    Comment


    • #3
      Ahmed,

      The results of those three hypothesis tests are not incompatible. Each of the null hypotheses sets a coefficient, or the sum of two coefficients, to 0, but the result of each test is not that the coefficient or sum equals 0; it is that the coefficient or sum is not significantly different from 0 (at, say, the .05 level). This is one of the reasons for testing significance, rather than hypotheses, and for looking at the corresponding confidence intervals. Usually one knows without testing that a null hypothesis that sets a parameter to 0 is false; one would not expect the parameter to be exactly 0. The question, then, is whether the difference between the parameter and 0 is large enough to be statistically significant.
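
      For reference, the statistic behind each of these results has the standard Wald form: for a single linear restriction $H_0\colon c'\beta = r$ (for example $\beta_{\mathrm{abn2W}} + \beta_{\mathrm{op2WA}} = 0$), -test- reports
      $$F(1,\nu) \;=\; \frac{(c'\hat{\beta} - r)^2}{c'\,\widehat{\mathrm{Var}}(\hat{\beta})\,c},$$
      with $\nu$ the residual degrees of freedom (38358 in the output above). A small $F$ therefore says only that the estimate of $c'\beta$ is small relative to its standard error, not that $c'\beta$ is exactly 0.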

      Comment


      • #4
        Hi,
        This has been so confusing for me. Let me give another example:
        xtreg crmv3WL bvW abncoreW neg_siW op2W l.bvW i.yr ,fe

        crmv3WL Coef. Std. Err. t P>t [95% Conf. Interval]

        bvW .2607397 .007188 36.27 0.000 .246651 .2748285
        abncoreW 1.075023 .0194248 55.34 0.000 1.03695 1.113096
        neg_siW .6609034 .0512279 12.90 0.000 .5604953 .7613115
        op2W .343954 .4508849 0.76 0.446 -.539792 1.2277

        bvW
        L1. .046361 .0072323 6.41 0.000 .0321855 .0605364

        . test op2W=0

        ( 1) op2W = 0

        F( 1, 38358) = 0.58
        Prob > F = 0.4456

        This is convincing: yes, the coeff. on op2W is not significantly different from zero. I can also see that from the t test in the regression output.

        Now,
        . test abncoreW=op2W

        ( 1) abncoreW - op2W = 0

        F( 1, 38358) = 2.63
        Prob > F = 0.1049

        This shows that the two coefficients are not statistically different from one another. How come? op2W is almost zero, and abncoreW is statistically different from zero in the regression results...

        More confusing
        . test abncoreW=neg_siW

        ( 1) abncoreW - neg_siW = 0

        F( 1, 38358) = 51.85
        Prob > F = 0.0000

        This means that they are statistically different from one another.
        How can abncoreW and neg_siW be different from one another, while abncoreW, which is highly significant, is not statistically different from op2W, which is almost zero?
        This is driving me crazy!


        I really appreciate your help! Is there something wrong with the "test" command in Stata? As you see, I run it after a fixed-effects model. What does that mean? I really have no interpretation!

        Comment


        • #5
          I don't know what the point of all this significance testing is, but the test results are all consistent with the regression results that you have.

          Code:
          xtreg crmv3WL bvW abncoreW neg_siW op2W l.bvW i.yr ,fe
          
          crmv3WL       Coef.   Std. Err.  t     P>|t|  [95% Conf. Interval]
          
          bvW         .2607397 .007188   36.27  0.000    .246651    .2748285
          abncoreW   1.075023  .0194248  55.34  0.000   1.03695    1.113096
          neg_siW     .6609034 .0512279  12.90  0.000    .5604953   .7613115
          op2W        .343954  .4508849   0.76  0.446   -.539792   1.2277
          
          bvW
          L1.         .046361  .0072323   6.41  0.000    .0321855   .0605364
          The coefficients for bvW, abncoreW, and neg_siW are all estimated quite precisely; their standard errors are small, and so their 95% CIs are all narrow. The coefficient for op2W, on the other hand, has a comparatively huge standard error, and its 95% CI is very wide.

          The coefficients of bvW, abncoreW, and neg_siW all lie inside the 95% CI of op2W.
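
          To see why -test abncoreW = op2W- does not reject, it can help to look at the difference directly (a sketch; it assumes the xtreg above is the last estimation in memory):

          Code:
          * estimate and standard error of the difference between the two coefficients
          lincom abncoreW - op2W
          * the F reported by -test- is the square of the resulting t statistic;
          * the SE of the difference is dominated by op2W's large standard error
          di "implied F = " (r(estimate)/r(se))^2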

          Comment


          • #6
            As a sidelight, note how much easier Kieran's results are to read than the earlier presentation of them. People should learn how to use the advanced editor; in particular, the code function. This, to me, is one of the bigger advantages of the new forum.

            Turning back to the substance of the problem -- there is nothing that says significance tests have to be logically consistent. They are probabilistic statements, not absolute statements of true/false. For example, suppose one coefficient equals 1 with a standard error of .1, while another equals 1 with a standard error of 5. The first will be highly significant while the other won't be. Nonetheless, you won't reject the hypothesis that the coefficients are equal because, after all, they are the exact same number.
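
            In Stata terms, a minimal sketch of that hypothetical example (the numbers are the illustrative values above, not estimates from any of the regressions in this thread):

            Code:
            * t statistic for the precisely estimated coefficient: 1/.1 = 10 (highly significant)
            di 1/.1
            * t statistic for the imprecisely estimated coefficient: 1/5 = .2 (not significant)
            di 1/5
            * numerator of any test that the two coefficients are equal: 1 - 1 = 0,
            * so that test statistic is 0 regardless of the standard errors
            di 1 - 1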

            An error you will sometimes see: suppose you have a sample of 100 blacks and a sample of 1,000 whites. People may make statements like "X has an effect on whites but not blacks." But the actual coefficients may be quite similar for the two groups and may not differ significantly; differences in statistical significance across groups may simply reflect large differences in sample size. So you have to be careful about claiming that an effect is important for one group but unimportant for the other when you are trying to discuss differences across groups.
            -------------------------------------------
            Richard Williams, Notre Dame Dept of Sociology
            StataNow Version: 19.5 MP (2 processor)

            EMAIL: [email protected]
            WWW: https://www3.nd.edu/~rwilliam

            Comment


            • #7
              Thanks Richard, I am starting to get the point. But does that mean the Wald test is oriented more to the value of the coefficients than to their individual statistical significance? From my reading of your post, you seem to focus more on the value of the coefficient.
              I am interested in the coefficient attached to op2W. Theory suggests that if the coefficient on op2W is 0, the variable is forecasting irrelevant. When I test this theory, let's say I get a coefficient of 0.6 on this variable and its t stat is so low that the variable is statistically insignificant. Suppose I then use the test command in Stata to report an F test (Wald test) and I cannot reject the null that the coefficient is equal to zero, which happens sometimes and, from my reading of your post, is because the test also takes the value into account. What do I conclude in the end: is the variable forecasting irrelevant based on the t stat, or relevant based on the Wald test?

              These might be basic questions, but I am really confused... I also do not know how to present the output in an organized manner like you do, Kieran.

              Best
              Ahmed

              Comment


              • #8
                Ahmed: The FAQ gives a hint on how to format code. See section 12 in http://www.statalist.org/forums/help
                Last edited by Nick Cox; 26 May 2014, 18:38.

                Comment


                • #9
                  Thanks Nick, I will read it again to find out how to write the code properly in the forum.
                  I hope I get some answers to my inquiry about concluding relevance or irrelevance of the variable as predicted by theory.

                  Comment


                  • #10
                    I think your question has already been answered more than once, and about as clearly as it can be, so I am very puzzled by implications to the contrary. How you might use significance tests in discussions bearing on forecasting, theoretical predictions, etc. seems a different issue to me.

                    Comment


                    • #11
                      Ahmed, just because the t value for op2W is small does not mean that the effect of op2W is 0. It just means we can't reasonably rule out the possibility that it is 0. The confidence interval suggests that the coefficient could be as low as -.54 or as high as +1.23. To confirm this, run commands like

                      Code:
                      test op2W = 0
                      test op2W = -.53
                      test op2W = 1.22


                      I am not sure if I understand your point about Wald tests. If you test a single coefficient, then the Wald test and t test should give you equivalent results (at least I think they should; maybe there are some statistical techniques where they don't). If you have counterexamples, I'd be interested in seeing them.
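
                      As a quick check against the output earlier in this thread (a rough calculation using the rounded t statistics that were reported, so the match is only up to rounding):

                      Code:
                      * squared t for abn2W from #1: approximately the F of 1964.34 from -test abn2W = 0-
                      di 44.32^2
                      * squared t for op2W from #4: approximately the F of 0.58 from -test op2W = 0-
                      di 0.76^2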

                      Here are examples of what I am talking about.

                      Code:
                      . webuse nlswork, clear
                      (National Longitudinal Survey.  Young Women 14-26 years of age in 1968)
                      
                      . xtreg ln_w age, fe
                      
                      Fixed-effects (within) regression               Number of obs      =     28510
                      Group variable: idcode                          Number of groups   =      4710
                      
                      R-sq:  within  = 0.1026                         Obs per group: min =         1
                             between = 0.0877                                        avg =       6.1
                             overall = 0.0774                                        max =        15
                      
                                                                      F(1,23799)         =   2720.20
                      corr(u_i, Xb)  = 0.0314                         Prob > F           =    0.0000
                      
                      ------------------------------------------------------------------------------
                           ln_wage |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
                      -------------+----------------------------------------------------------------
                               age |   .0181349   .0003477    52.16   0.000     .0174534    .0188164
                             _cons |   1.148214   .0102579   111.93   0.000     1.128107     1.16832
                      -------------+----------------------------------------------------------------
                           sigma_u |  .40635023
                           sigma_e |  .30349389
                               rho |  .64192015   (fraction of variance due to u_i)
                      ------------------------------------------------------------------------------
                      F test that all u_i=0:     F(4709, 23799) =     8.81         Prob > F = 0.0000
                      
                      . test age
                      
                       ( 1)  age = 0
                      
                             F(  1, 23799) = 2720.20
                                  Prob > F =    0.0000
                      
                      . * F value for age is the t value squared
                      
                      . di 52.16^2
                      2720.6656
                      
                      . * Cannot reject values that fall within the CI for age
                      
                      . test age = .0175
                      
                       ( 1)  age = .0175
                      
                             F(  1, 23799) =    3.33
                                  Prob > F =    0.0679
                      
                      . test age = .0188
                      
                       ( 1)  age = .0188
                      
                             F(  1, 23799) =    3.66
                                  Prob > F =    0.0558
                      -------------------------------------------
                      Richard Williams, Notre Dame Dept of Sociology
                      StataNow Version: 19.5 MP (2 processor)

                      EMAIL: [email protected]
                      WWW: https://www3.nd.edu/~rwilliam

                      Comment
