
  • #16
    Originally posted by Carlo Lazzaro:
    wanhaiyou:
    as far as the part of your code with interaction is concerned, I would write it a bit differently, using -fvvarlist-:
    Code:
    reg ln_wage c.age##i.race
    Results are different from the ones obtained with your code: creating the interaction by hand makes Stata lose track of the variables that enter the interaction.
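    For comparison, a minimal sketch of the hand-made approach (the dataset and the variable name ageXrace are assumptions, chosen only to match the toy example):
    Code:
    webuse nlswork, clear
    * hand-made interaction: Stata sees only a new numeric variable,
    * not the age and race components it was built from (and race is treated as numeric)
    generate ageXrace = age * race
    reg ln_wage age race ageXrace
    * factor-variable notation keeps that information, so -margins- can use it later
    reg ln_wage c.age##i.race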
    Conversely, if you rely on -fvvarlist- to create interactions and higher-order terms, useful commands like -margins- and -marginsplot- (as Maarten pointed out) can be called easily, as in the following toy example:
    Code:
    reg ln_wage c.age##i.race
    margins race, dydx(age)
    marginsplot, xdimension(race)
    Hello Carlo,
    Could you please help me solve the question in #15?
    Thanks very much.
    Best regards,
    wanhaiyou



    • #17
      It seems to me the answer is in the "suest" manual entry, in a technical note on page 3:
      Earlier we said that standard errors from suest are identical to those obtained by specifying the vce(robust) option with each command individually. Thus if you fit a logistic model using logit with the vce(robust) option, you will get the same standard errors when you type suest directly after logit using the same data without the vce(robust) option. This is not true for multiple estimation results when the estimation samples are not all the same. The standard errors from suest will be slightly smaller than those from individual model fits using the vce(robust) option because suest uses a larger number of observations to estimate the simultaneous (co)variance matrix.
      This is also confirmed by the fact that, in the example reported above, the robust SE of mpg for index=0 is smaller in the "suest" model (hence a larger absolute z-statistic and a smaller p-value). It seems to me the point is that "suest" gives more powerful estimates because it uses all the information simultaneously: it basically borrows information from the other samples when building the standard errors, by allowing for between-model covariances of the parameter estimates.
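      A minimal sketch of the comparison the manual describes, using the auto data as a stand-in for the models in the example above:
      Code:
      sysuse auto, clear
      * robust SEs from the individual fit
      logit foreign mpg weight, vce(robust)
      * the same model fitted without vce(robust), stored and passed to -suest-
      logit foreign mpg weight
      estimates store m1
      suest m1
      * with a single model on the same sample the -suest- SEs match the robust ones;
      * with several models on different estimation samples they can be slightly smaller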




      • #18
        Sorry, I think my previous answer was incorrect. I guess the proper answer may be found here:

        http://www.stata.com/statalist/archi.../msg01265.html

        In fact, in logistic regression "suest" estimates just one equation per model (maybe because the variance is a function of the mean, i.e. the estimated probability, so it does not need to be estimated separately), whereas with normal regression (differently from the plain "regress" output) "suest" also estimates a parameter for log(variance).
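        A minimal sketch of what this looks like in the output, using the auto data as a stand-in for the models discussed above:
        Code:
        sysuse auto, clear
        * normal regression: -suest- reports the mean equation plus a lnvar equation
        regress price mpg
        estimates store ols
        suest ols
        * logistic regression: -suest- reports only the single index equation
        logit foreign mpg
        estimates store lgt
        suest lgt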

