  • OLS vs. ANOVA

    My understanding is that -regress- and -anova- are similar/identical. But I have a biological study which gives me conflicting outputs. We have five groups of five animals given different doses of a drug, and a response variable:
    Code:
    . anova response dose
    
                             Number of obs =         25    R-squared     =  0.3047
                             Root MSE      =    2.80447    Adj R-squared =  0.1656
    
                      Source | Partial SS         df         MS        F    Prob>F
                  -----------+----------------------------------------------------
                       Model |  68.921099          4   17.230275      2.19  0.1069
                             |
                        dose |  68.921099          4   17.230275      2.19  0.1069
                             |
                    Residual |  157.30149         20   7.8650744  
                  -----------+----------------------------------------------------
                       Total |  226.22259         24   9.4259411  
    
    . regress response i.dose
    
          Source |       SS           df       MS      Number of obs   =        25
    -------------+----------------------------------   F(4, 20)        =      2.19
           Model |  68.9210991         4  17.2302748   Prob > F        =    0.1069
        Residual |  157.301487        20  7.86507437   R-squared       =    0.3047
    -------------+----------------------------------   Adj R-squared   =    0.1656
           Total |  226.222587        24  9.42594111   Root MSE        =    2.8045
    
    ------------------------------------------------------------------------------
        response |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
            dose |
             10  |   1.993438   1.773705     1.12   0.274    -1.706446    5.693321
             50  |   2.910306   1.773705     1.64   0.116    -.7895783     6.61019
            100  |   4.432047   1.773705     2.50   0.021     .7321629    8.131931
            200  |   4.421635   1.773705     2.49   0.022     .7217513    8.121519
                 |
           _cons |   13.86277   1.254199    11.05   0.000     11.24656    16.47899
    ------------------------------------------------------------------------------
    The R-squared values are the same, as is to be expected, but the p value from -anova- indicates no significant difference across groups (hence no need for -pwcompare-), while -regress- shows differences in the two highest dose groups.

    The problem with -anova- is that we have some outliers:
    [Attachment: test.png, boxplots of response by dose group]

    Is -regress- a better option in this case?

    Data:
    Code:
    * Example generated by -dataex-. To install: ssc install dataex
    clear
    input int dose double response
      0 10.99277305
      0 16.23023442
      0 15.34134276
      0 13.17231787
      0 13.57720313
     10 14.61668459
     10 17.15264022
     10 16.05454947
     10 15.15194346
     10   16.305241
     50 16.35721169
     50 19.48033492
     50 12.18403701
     50  16.1171514
     50 19.72666485
    100 17.87319768
    100 12.98239008
    100 22.46244988
    100 19.36247368
    100 18.79359433
    200 12.00186494
    200  19.9286369
    200  19.0495354
    200 19.31872792
    200 21.12328229
    end
    Stata 14.2MP
    OS X

  • #2
    They're the same (below). And you don't have any outliers.

    Code:
    . clear

    . input int dose double response

             dose    response
      1.   0 10.99277305
      2.   0 16.23023442
      3.   0 15.34134276
      4.   0 13.17231787
      5.   0 13.57720313
      6.  10 14.61668459
      7.  10 17.15264022
      8.  10 16.05454947
      9.  10 15.15194346
     10.  10   16.305241
     11.  50 16.35721169
     12.  50 19.48033492
     13.  50 12.18403701
     14.  50  16.1171514
     15.  50 19.72666485
     16. 100 17.87319768
     17. 100 12.98239008
     18. 100 22.46244988
     19. 100 19.36247368
     20. 100 18.79359433
     21. 200 12.00186494
     22. 200  19.9286369
     23. 200  19.0495354
     24. 200 19.31872792
     25. 200 21.12328229
     26. end

    .
    . quietly regress response i.dose

    . testparm i.dose

     ( 1)  10.dose = 0
     ( 2)  50.dose = 0
     ( 3)  100.dose = 0
     ( 4)  200.dose = 0

           F(  4,    20) =    2.19
                Prob > F =    0.1069

    . predict double res, residuals

    .
    . pause on

    . rvfplot

    . pause
    pause:
    -> . q
    execution resumes...

    .
    . pnorm res

    . pause
    pause:
    -> . q
    execution resumes...

    .
    . qnorm res

    . pause
    pause:
    -> . q
    execution resumes...

    .
    . exit

    end of do-file


    .
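
    The equivalence can also be cross-checked outside Stata. As a minimal sketch (assuming scipy is available), a one-way ANOVA on the five dose groups from the -dataex- listing yields the same omnibus F as the joint test on i.dose:

```python
from scipy import stats

# Response values by dose group, copied from the -dataex- listing in #1
groups = {
    0:   [10.99277305, 16.23023442, 15.34134276, 13.17231787, 13.57720313],
    10:  [14.61668459, 17.15264022, 16.05454947, 15.15194346, 16.305241],
    50:  [16.35721169, 19.48033492, 12.18403701, 16.1171514, 19.72666485],
    100: [17.87319768, 12.98239008, 22.46244988, 19.36247368, 18.79359433],
    200: [12.00186494, 19.9286369, 19.0495354, 19.31872792, 21.12328229],
}

# One-way ANOVA: the same test as -anova response dose- and the
# overall F test of -regress response i.dose-
F, p = stats.f_oneway(*groups.values())
print(f"F(4, 20) = {F:.2f}, Prob > F = {p:.4f}")  # matches Stata: 2.19, 0.1069
```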



    • #3
      The "nptrend" test gives a significant result for these data, as does -regress- treating the x variable as continuous. Sometimes the omnibus test will not give a result that agrees with specific contrasts. You have enough data to declare a trend, but maybe you'd want more data.





      • #4
        With 5 groups and just five 'subjects' per group, (lack of) power is a matter of concern. This may also explain why the difference is statistically significant only for the extreme contrasts, despite the hints of a trend in the boxplots.
        Last edited by Marcos Almeida; 01 Nov 2017, 18:10.
        Best regards,

        Marcos



        • #5
          Thank you, all, I appreciate the feedback. I have to admit to some confusion as to what -regress- is showing me:
          Code:
          . regress response i.dose
          
                Source |       SS           df       MS      Number of obs   =        25
          -------------+----------------------------------   F(4, 20)        =      2.19
                 Model |  68.9210991         4  17.2302748   Prob > F        =    0.1069
              Residual |  157.301487        20  7.86507437   R-squared       =    0.3047
          -------------+----------------------------------   Adj R-squared   =    0.1656
                 Total |  226.222587        24  9.42594111   Root MSE        =    2.8045
          
          ------------------------------------------------------------------------------
              response |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
          -------------+----------------------------------------------------------------
                  dose |
                   10  |   1.993438   1.773705     1.12   0.274    -1.706446    5.693321
                   50  |   2.910306   1.773705     1.64   0.116    -.7895783     6.61019
                  100  |   4.432047   1.773705     2.50   0.021     .7321629    8.131931
                  200  |   4.421635   1.773705     2.49   0.022     .7217513    8.121519
                       |
                 _cons |   13.86277   1.254199    11.05   0.000     11.24656    16.47899
          ------------------------------------------------------------------------------
          
          . regress response dose
          
                Source |       SS           df       MS      Number of obs   =        25
          -------------+----------------------------------   F(1, 23)        =      6.08
                 Model |  47.3112368         1  47.3112368   Prob > F        =    0.0215
              Residual |   178.91135        23  7.77875434   R-squared       =    0.2091
          -------------+----------------------------------   Adj R-squared   =    0.1748
                 Total |  226.222587        24  9.42594111   Root MSE        =     2.789
          
          ------------------------------------------------------------------------------
              response |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
          -------------+----------------------------------------------------------------
                  dose |   .0188323   .0076362     2.47   0.022     .0030357     .034629
                 _cons |   15.25833   .7832222    19.48   0.000     13.63811    16.87855
          ------------------------------------------------------------------------------
          I had assumed that, since the output of -regress- with dose as a continuous variable showed a significant trend with dose, the output with dose as a factor variable would give me the p values for each group. This led me to conclude that the 100 and 200 groups were significantly different from the 0 group (control). If this is not the case, please can you explain what the p values of 0.021 and 0.022 in the regression table refer to? They seem to be at odds with the output of -testparm-.

          The "nptrend" test gives a significant test for these data.
          Thanks for that. I had found a significant trend also with -jonter-, but turned to -regress- to try and get some information from a more quantitative perspective.
          Stata 14.2MP
          OS X



          • #6
            Nigel:
            as an aside to previous helpful comments, see also https://www.statalist.org/forums/for...eroscedstisity for a quantitative test concerning the ratio between the squared number of parameters and the sample size (the ratio is expected to go to zero as your sample size goes to infinity; in your case you have 4 parameters + 1 constant = 5 and 25 observations, hence (5^2)/25 = 1).
            As expected, the R-squared values are the same in your models, but the -regress- output is telling you that, when contrasted against the reference category (no drug), doses of 100 or 200 show a significant difference in response. Setting aside the apparent need to get more data (if feasible), I would also wonder whether the -no drug- category is meaningful in your research field. It is not my ballpark, but it seems reasonable that no drug ends up in no response whereas higher doses of the same drug do.
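
            The contrast-against-reference reading can be verified by hand. As a sketch (assuming numpy and scipy, with the group data and the residual sum of squares taken from the output in #1), each factor-level coefficient is simply that group's mean minus the control mean, and its t test uses the pooled residual mean square, so each p value tests one contrast, not the overall dose effect:

```python
import numpy as np
from scipy import stats

# Control (dose 0) and dose 100 groups, from the -dataex- listing in #1
ctrl = np.array([10.99277305, 16.23023442, 15.34134276, 13.17231787, 13.57720313])
d100 = np.array([17.87319768, 12.98239008, 22.46244988, 19.36247368, 18.79359433])

# Pooled residual mean square from the -regress- table (157.301487 / 20)
mse, df_resid, n = 157.301487 / 20, 20, 5

coef = d100.mean() - ctrl.mean()      # the 100.dose coefficient
se = np.sqrt(mse * (1 / n + 1 / n))   # its standard error
t = coef / se
p = 2 * stats.t.sf(abs(t), df_resid)  # two-sided p on 20 residual df

print(f"coef = {coef:.6f}, se = {se:.6f}, t = {t:.2f}, p = {p:.3f}")
# matches the table row for dose 100: 4.432047, 1.773705, 2.50, 0.021
```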
            Kind regards,
            Carlo
            (Stata 19.0)



            • #7
              Carlos

              Many thanks for that helpful response.

              What we are looking at here is an increase in an endogenous metabolite in response to treatment. So the reference group (0 drug) is the baseline against which the levels in the treatment groups are compared.

              Incidentally, when I ran -margins- after -regress- with the i.dose indepvar, I got the same result as the -regress- table:
              Code:
              . margins, dydx(dose)
              
              Conditional marginal effects                    Number of obs     =         25
              Model VCE    : OLS
              
              Expression   : Linear prediction, predict()
              dy/dx w.r.t. : 10.dose 50.dose 100.dose 200.dose
              
              ------------------------------------------------------------------------------
                           |            Delta-method
                           |      dy/dx   Std. Err.      t    P>|t|     [95% Conf. Interval]
              -------------+----------------------------------------------------------------
                      dose |
                       10  |   1.993438   1.773705     1.12   0.274    -1.706446    5.693321
                       50  |   2.910306   1.773705     1.64   0.116    -.7895783     6.61019
                      100  |   4.432047   1.773705     2.50   0.021     .7321629    8.131931
                      200  |   4.421635   1.773705     2.49   0.022     .7217513    8.121519
              ------------------------------------------------------------------------------
              Note: dy/dx for factor levels is the discrete change from the base level.
              This isn't too surprising in itself, but it still seems at odds with Joseph's conclusion with -testparm- (unless I've totally misunderstood the outcome!).
              Stata 14.2MP
              OS X



              • #8
                Nigel:
                Thanks for the clarifications.
                I think that the limping results you get are due to the limited sample size: you may want to take a look at -help power oneway- to get a more comprehensive coverage of the issue.
                As an aside, -regress-, in general, outperforms -anova- (see -regress postestimation-, for example). This may be a reason why, on this forum, queries on -anova- are less frequent than they were in the past.
                Last edited by Carlo Lazzaro; 02 Nov 2017, 01:46.
                Kind regards,
                Carlo
                (Stata 19.0)



                • #9
                  Carlo (apologies for renaming you earlier, I think that my post may have been hijacked by autocorrect),

                  Thank you for this clarification. I will definitely look into -power oneway-. But -anova- and -regress- aren't that different in this case; the overall p value is the same for each (Prob > F = 0.1069).

                  The problem that's still troubling me is the difference in the output given by -margins- and -testparm-. The way that I read it is that the former shows 100 and 200 to be different to 0 at the 5% level, while the latter does not. Or is it the case that -testparm- is just telling us that there is no difference in the coefficients between different levels of i.dose?

                  By the way, the small sample size is, I'm afraid, just a fact of life for those of us in the biological sciences (ethical & welfare concerns).
                  Stata 14.2MP
                  OS X



                  • #10
                    Nigel:
                    something similar occurs in the following toy example (asterisks are mine):
                    Code:
                    . regress price c.trunk
                    
                          Source |       SS           df       MS      Number of obs   =        74
                    -------------+----------------------------------   F(1, 72)        =      7.89
                           Model |  62747229.9         1  62747229.9   Prob > F        =    0.0064
                        Residual |   572318166        72  7948863.42   R-squared       =    0.0988
                    -------------+----------------------------------   Adj R-squared   =    0.0863
                           Total |   635065396        73  8699525.97   Root MSE        =    2819.4
                    
                    ------------------------------------------------------------------------------
                           price |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
                    -------------+----------------------------------------------------------------
                           *trunk |   216.7482   77.14554     2.81   0.006     62.96142     370.535*
                           _cons |   3183.504   1110.728     2.87   0.005     969.3088    5397.699
                    ------------------------------------------------------------------------------
                    
                    . anova price trunk
                    
                                             Number of obs =         74    R-squared     =  0.3197
                                             Root MSE      =    2777.65    Adj R-squared =  0.1131
                    
                                      Source | Partial SS         df         MS        F    Prob>F
                                  -----------+----------------------------------------------------
                                       Model |  2.030e+08         17    11941591      1.55  0.1117
                                             |
                                       trunk |  2.030e+08         17    11941591      1.55  0.1117
                                             |
                                    Residual |  4.321e+08         56   7715327.6 
                                  -----------+----------------------------------------------------
                                       Total |  6.351e+08         73     8699526 
                          
                    . regress price i.trunk
                    
                          Source |       SS           df       MS      Number of obs   =        74
                    -------------+----------------------------------   F(17, 56)       =      1.55
                           Model |   203007048        17  11941591.1   Prob > F        =    0.1117
                        Residual |   432058348        56  7715327.64   R-squared       =    0.3197
                    -------------+----------------------------------   Adj R-squared   =    0.1131
                           Total |   635065396        73  8699525.97   Root MSE        =    2777.6
                    
                    ------------------------------------------------------------------------------
                           price |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
                    -------------+----------------------------------------------------------------
                           trunk |
                              6  |       1730   3928.187     0.44   0.661    -6139.105    9599.105
                              7  |  -241.3333   3207.351    -0.08   0.940     -6666.43    6183.764
                              8  |     1154.4   3042.761     0.38   0.706    -4940.982    7249.782
                              9  |    -682.75   3105.505    -0.22   0.827    -6903.824    5538.324
                             10  |        6.8   3042.761     0.00   0.998    -6088.582    6102.182
                             11  |    216.875    2946.14     0.07   0.942    -5684.954    6118.704
                             12  |   2392.333   3207.351     0.75   0.459    -4032.764     8817.43
                             13  |    2592.75   3105.505     0.83   0.407    -3628.324    8813.824
                             14  |    4267.25   3105.505     1.37   0.175    -1953.824    10488.32
                             15  |     3054.2   3042.761     1.00   0.320    -3041.182    9149.582
                             16  |   1469.667   2891.068     0.51   0.613    -4321.838    7261.171
                             17  |   1455.875    2946.14     0.49   0.623    -4445.954    7357.704
                             *18  |       9095   3928.187     2.32   0.024     1225.895     16964.1*
                             20  |   2904.167   3000.203     0.97   0.337    -3105.961    8914.295
                             21  |       1566    3401.91     0.46   0.647    -5248.845    8380.845
                             22  |       6998   3928.187     1.78   0.080    -871.1047     14867.1
                             23  |       1666   3928.187     0.42   0.673    -6203.105    9535.105
                                 |
                           _cons |       4499   2777.648     1.62   0.111    -1065.297     10063.3
                    ------------------------------------------------------------------------------
                    
                    . testparm i.(trunk)
                    
                     ( 1)  6.trunk = 0
                     ( 2)  7.trunk = 0
                     ( 3)  8.trunk = 0
                     ( 4)  9.trunk = 0
                     ( 5)  10.trunk = 0
                     ( 6)  11.trunk = 0
                     ( 7)  12.trunk = 0
                     ( 8)  13.trunk = 0
                     ( 9)  14.trunk = 0
                     (10)  15.trunk = 0
                     (11)  16.trunk = 0
                     (12)  17.trunk = 0
                     (13)  18.trunk = 0
                     (14)  20.trunk = 0
                     (15)  21.trunk = 0
                     (16)  22.trunk = 0
                     (17)  23.trunk = 0
                    
                           F( 17,    56) =    1.55
                                *Prob > F =    0.1117*
                    
                    
                    . margins, dydx( trunk )
                    
                    Conditional marginal effects                    Number of obs     =         74
                    
                    Expression   : Linear prediction, predict()
                    dy/dx w.r.t. : 6.trunk 7.trunk 8.trunk 9.trunk 10.trunk 11.trunk 12.trunk 13.trunk 14.trunk 15.trunk 16.trunk
                                   17.trunk 18.trunk 20.trunk 21.trunk 22.trunk 23.trunk
                    
                    ------------------------------------------------------------------------------
                                 |            Delta-method
                                 |      dy/dx   Std. Err.      t    P>|t|     [95% Conf. Interval]
                    -------------+----------------------------------------------------------------
                           trunk |
                              6  |       1730   3928.187     0.44   0.661    -6139.105    9599.105
                              7  |  -241.3333   3207.351    -0.08   0.940     -6666.43    6183.764
                              8  |     1154.4   3042.761     0.38   0.706    -4940.982    7249.782
                              9  |    -682.75   3105.505    -0.22   0.827    -6903.824    5538.324
                             10  |        6.8   3042.761     0.00   0.998    -6088.582    6102.182
                             11  |    216.875    2946.14     0.07   0.942    -5684.954    6118.704
                             12  |   2392.333   3207.351     0.75   0.459    -4032.764     8817.43
                             13  |    2592.75   3105.505     0.83   0.407    -3628.324    8813.824
                             14  |    4267.25   3105.505     1.37   0.175    -1953.824    10488.32
                             15  |     3054.2   3042.761     1.00   0.320    -3041.182    9149.582
                             16  |   1469.667   2891.068     0.51   0.613    -4321.838    7261.171
                             17  |   1455.875    2946.14     0.49   0.623    -4445.954    7357.704
                             18  |       9095   3928.187     2.32   0.024     1225.895     16964.1
                             20  |   2904.167   3000.203     0.97   0.337    -3105.961    8914.295
                             21  |       1566    3401.91     0.46   0.647    -5248.845    8380.845
                             22  |       6998   3928.187     1.78   0.080    -871.1047     14867.1
                             23  |       1666   3928.187     0.42   0.673    -6203.105    9535.105
                    ------------------------------------------------------------------------------
                    Note: dy/dx for factor levels is the discrete change from the base level.
                    When -trunk- was treated as continuous, it reached statistical significance.
                    When -trunk- was made categorical, the only significant level is 18 (as per -regress- and -margins-); -testparm- does not reach statistical significance (like the F-test of -regress- and -anova-).
                    However, categorizing a continuous predictor is not without cost, as you can read in my favourite reference on this topic: http://citeseerx.ist.psu.edu/viewdoc...=rep1&type=pdf
                    In the end, I see the issue with limited sample size in your research field: it's the same with rare diseases, and there's nothing you can do to change that.
                    Kind regards,
                    Carlo
                    (Stata 19.0)



                    • #11
                      Carlo

                      Thank you for your feedback and insight. I agree that categorisation of a continuous predictor is not always desirable, especially in occupational epidemiology studies where exposure, for example, is rarely fixed.

                      In this case, however, exposure is fixed (or, is it?), and so while dose/exposure is a continuum (we could dose 10, 50..., or 10, 12.5, 50...), it is also categorical.

                      But that is one area where I might take issue with Royston et al. In experimental animal studies, we often administer the test material by gastric intubation, feed/diet, or inhalation. But there will be variance around the received dose that we can rarely quantify (e.g. in inhalation studies, animals are exposed together in a chamber, but the received dose varies between individuals based on body weight, respiratory rate, etc.). In such cases, the category of dose is the only thing that is truly set, and therefore the only true indepvar.
                      Stata 14.2MP
                      OS X



                      • #12
                        Nigel:
                        I see the issue.
                        As my family is composed of three humans and three cats, it's my everyday experience that their behaviours in eating, playing, fighting each other, and resting are subject to remarkable within and between variance!
                        Kind regards,
                        Carlo
                        (Stata 19.0)



                        • #13
                          Joseph and Carlo gave excellent explanations.

                          As I remarked in #4, there is, potentially, low power.

                          To put it bluntly, with 25 "individuals" divided equally into 5 groups, an alpha of 0.05, and power of 0.82, we'd need a humongous effect size (beyond 0.8, when we usually consider >= 0.40 as "large"!) to spot a significant difference.
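
                          That rough figure can be checked numerically. The sketch below (assuming scipy is available) uses the standard noncentral-F formulation of one-way ANOVA power with Cohen's f as the effect size; it may differ in detail from whatever calculator produced the numbers above:

```python
from scipy import stats

def anova_power(f_effect, n_total=25, k=5, alpha=0.05):
    """Power of the one-way ANOVA F test for Cohen's effect size f."""
    df1, df2 = k - 1, n_total - k
    nc = n_total * f_effect**2                  # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, df1, df2)   # rejection threshold
    return stats.ncf.sf(f_crit, df1, df2, nc)   # P(reject | true effect f)

# A "large" effect (f = 0.40) leaves the design badly underpowered;
# only a huge effect (f = 0.80) gets power near 0.8
for f_effect in (0.40, 0.80):
    print(f"f = {f_effect}: power = {anova_power(f_effect):.2f}")
```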

                          As a matter of fact, both ANOVA and regression (with the -testparm- command) are telling a similar thing: as a whole, we cannot say there is a statistical difference between groups.

                          Expressing myself without trying to beat about the bush (and avoiding discussion on the fact that it is a "crude" model, i.e., it doesn't have covariates, hence things would probably get even worse in the "adjusted" full model), finding a statistical difference in the "omnibus" test would be like wishing to test whether the elephant is significantly bigger than the ant.

                          Moreover, although I'm not sure this view is unanimous, when one gets an "omnibus" F statistic below the critical value, as far as I'm concerned, one tends to give the "detailed" output (with levels of the categorical variable) a pass, at least from an inferential point of view.

                          I do understand that there is always the issue of "welfare and ethical concerns", as you dutifully pointed out; I'm very considerate about that, and I know this is a scenario to be oftentimes faced, unfortunately.

                          On the other hand, at the end of the day, perhaps we should stand up and reply to the committee that relying on a potentially underpowered study may not be the best approach to curb this issue, for a bunch of good reasons: loss of time, effort, and investment, as well as inflating the beta error and the "null hypothesis" nemesis, to name a few.

                          That said, and trying to find an alternative to what seems to be, to some extent, a predicament - with a warning that I don't believe much in miracles -, maybe you should try permutation tests, or perhaps you should select a Bayesian model.

                          Hopefully it truly helps.
                          Last edited by Marcos Almeida; 02 Nov 2017, 05:49.
                          Best regards,

                          Marcos



                          • #14
                            Thank you, Marcos,

                            I'm afraid, then, that I still don't understand what the highlighted p values are telling me:

                            [Attachment: ols.PNG, -margins- output with the per-level p values highlighted]


                            I assumed that these values (given here by -margins-, but identical to the regression table) were the significance compared to the reference group. But that seems to be at odds with

                            both ANOVA and regression (with - testparm - command) are telling a similar thing: as a whole, we cannot say there is a statistical difference between groups
                            Stata 14.2MP
                            OS X



                            • #15
                              The highlighted p-values test the difference in means for those doses versus the reference dose, which is the 0 level. They are at odds with the omnibus test gotten either from the overall ANOVA F test or from the testparm command. The omnibus test is not a gate you can't cross, especially if you planned to look at the contrasts of interest. That being said, if you corrected for the 4 multiple tests, you might still conclude you lack enough evidence. I personally find p-values close to 5% equivocal; I'd rather see 1% or 1/10%. Again, you at least have evidence of a dose effect.


                              Code:
                               . regress response i.dose
                              
                                    Source |       SS           df       MS      Number of obs   =        25
                              -------------+----------------------------------   F(4, 20)        =      2.19
                                     Model |  68.9210991         4  17.2302748   Prob > F        =    0.1069
                                  Residual |  157.301487        20  7.86507437   R-squared       =    0.3047
                              -------------+----------------------------------   Adj R-squared   =    0.1656
                                     Total |  226.222587        24  9.42594111   Root MSE        =    2.8045
                              
                              ------------------------------------------------------------------------------
                                  response |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
                              -------------+----------------------------------------------------------------
                                      dose |
                                       10  |   1.993438   1.773705     1.12   0.274    -1.706446    5.693321
                                       50  |   2.910306   1.773705     1.64   0.116    -.7895783     6.61019
                                      100  |   4.432047   1.773705     2.50   0.021     .7321629    8.131931
                                      200  |   4.421635   1.773705     2.49   0.022     .7217513    8.121519
                                           |
                                     _cons |   13.86277   1.254199    11.05   0.000     11.24656    16.47899
                              ------------------------------------------------------------------------------
                              
                              . pwcompare dose, mcompare(dunnett) pveffects
                              
                              Pairwise comparisons of marginal linear predictions
                              
                              Margins      : asbalanced
                              
                              ---------------------------
                                           |    Number of
                                           |  Comparisons
                              -------------+-------------
                                      dose |            4
                              ---------------------------
                              
                              -----------------------------------------------------
                                           |                             Dunnett
                                           |   Contrast   Std. Err.      t    P>|t|
                              -------------+---------------------------------------
                                      dose |
                                10 vs   0  |   1.993438   1.773705     1.12   0.631
                                50 vs   0  |   2.910306   1.773705     1.64   0.319
                               100 vs   0  |   4.432047   1.773705     2.50   0.068
                               200 vs   0  |   4.421635   1.773705     2.49   0.069
                              -----------------------------------------------------
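
                              The direction of that adjustment can be seen even with a crude Bonferroni correction (a sketch needing nothing beyond the Python standard library, using the unadjusted p-values from the regression table above). Dunnett's method, as used by -pwcompare- here, is less conservative than Bonferroni because it exploits the correlation among contrasts that share the control group:

```python
# Unadjusted p-values from the regression table, one per dose-vs-control contrast
unadjusted = {10: 0.274, 50: 0.116, 100: 0.021, 200: 0.022}
k = len(unadjusted)  # 4 comparisons against the control

for dose, p in unadjusted.items():
    p_bonf = min(1.0, p * k)  # Bonferroni: multiply each p by the number of tests
    print(f"{dose:>3} vs 0: unadjusted p = {p:.3f}, Bonferroni p = {p_bonf:.3f}")
# Bonferroni gives 0.084 and 0.088 for the 100 and 200 contrasts;
# Dunnett's tighter bound gives 0.068 and 0.069: nonsignificant either way.
```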

