  • Compare the regression slopes of two different predictors in the same regression model

    Dear all,

With a logistic regression, I am trying to compare the coefficients of two different predictors of the same dependent variable, in order to see which one is more important/salient for predicting the DV.

I'm not sure whether the -lincom- command is appropriate in this context. For example,
Code:
lincom _b[x2] - _b[x1]

Could you give me some advice? Thanks!
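
For context, the full sequence I have in mind would look like this (y is a hypothetical name for my binary outcome; x1 and x2 are the two predictors):
Code:
logit y x1 x2
lincom _b[x2] - _b[x1]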





  • #2
This will test whether the two parameters are equal. That is not necessarily useful: rescaling either variable will change the test without changing the explained variance.
Some people who want to make your kind of decision standardize the x's to put them on a common scale.
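
For example, a minimal sketch of that standardization approach (with hypothetical variable names y, x1, x2):
Code:
egen zx1 = std(x1)
egen zx2 = std(x2)
logit y zx1 zx2
lincom _b[zx2] - _b[zx1]

The coefficients on zx1 and zx2 are then each expressed per standard deviation of the corresponding predictor. Whether that makes them "comparable" in any substantive sense is a separate question.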

    Comment


    • #3
Thanks Phil. But isn't "_b[name]" the standardized version of the regression coefficient?




      Comment


      • #4
No, it is not. Further, I disagree with those who argue that standardizing makes such things comparable; you need to use your substantive or contextual knowledge here to set up something meaningful.

To see that _b[name] is not standardized (unless the entire regression was standardized), try the following:
        Code:
sysuse auto, clear
regress price weight
di _b[weight]               // the raw slope, in dollars per pound -- not standardized
regress price weight, beta  // the beta column shows the standardized coefficient, for comparison
        Last edited by Rich Goldstein; 20 Nov 2016, 07:30.

        Comment


        • #5
          Originally posted by Rich Goldstein View Post
No, it is not. Further, I disagree with those who argue that standardizing makes such things comparable; you need to use your substantive or contextual knowledge here to set up something meaningful.
          I agree with Rich. Here is a short note I wrote for myself that summarizes John Fox's comments on standardized regression coefficients.
In his book "Applied Regression Analysis and Generalized Linear Models" (2008, Sage), John Fox is very cautious about the use of standardized regression coefficients. He gives this interesting example. When two variables are measured on the same scale (e.g., years of education and years of employment), the relative impact of the two can be compared directly. But suppose those two variables differ substantially in the amount of spread. In that case, comparison of the standardized regression coefficients would likely yield a very different story than comparison of the raw regression coefficients. Fox then says:

          "If expressing coefficients relative to a measure of spread potentially distorts their comparison when two explanatory variables are commensurable [i.e., measured on the same scale], then why should the procedure magically allow us to compare coefficients [for variables] that are measured in different units?" (p. 95)

          Good question!

          A page later, Fox adds the following:

          "A common misuse of standardized coefficients is to employ them to make comparisons of the effects of the same explanatory variable in two or more samples drawn from different populations. If the explanatory variable in question has different spreads in these samples, then spurious differences between coefficients may result, even when _unstandardized_ coefficients are similar; on the other hand, differences in unstandardized coefficients can be masked by compensating differences in dispersion." (p. 96)

          And finally, this comment on whether or not Y has to be standardized:

          "The usual practice standardizes the response variable as well, but this is an inessential element of the computation of standardized coefficients, because the _relative_ size of the slope coefficients does not change when Y is rescaled." (p. 95)

          HTH.
          --
          Bruce Weaver
          Email: [email protected]
          Version: Stata/MP 18.5 (Windows)

          Comment


          • #6
            Got it! Thanks for your notes!

            Comment


            • #7
I'll just pile on here. Even if there were some statistical procedure that could tell you which of two variables had a greater impact on the outcome (and there isn't), it would not follow that the one with the greater impact is more "important." The importance of a predictor might also depend on its distribution in the population of interest, and might also depend on whether it can be modified at will. The question "which is more important" does not belong in the domain of statistics. At best it belongs to the scientific domain of the study. More often, it's just a meaningless inquiry.

              Comment


              • #8
                Originally posted by Clyde Schechter View Post
I'll just pile on here. Even if there were some statistical procedure that could tell you which of two variables had a greater impact on the outcome (and there isn't), it would not follow that the one with the greater impact is more "important." The importance of a predictor might also depend on its distribution in the population of interest, and might also depend on whether it can be modified at will. The question "which is more important" does not belong in the domain of statistics. At best it belongs to the scientific domain of the study. More often, it's just a meaningless inquiry.
Hi Prof. Schechter, is it possible to get a reference for your insightful comment? Thank you.

                Comment


                • #9
                  Originally posted by Elvis Sumanti View Post
                  is it possible to get a reference for your insightful comment?
                  Its arguments are not self-evident?

                  Comment
