  • Probit - comparison across groups

    Hello everyone,

I learned through this forum that comparing groups (e.g., men vs. women) using probit regressions is problematic.
Why? Because the variance of the residuals will be different in each group (the two groups may have unobservables with different variances). Since the coefficients and the error variance are not separately identified, this shows up as a difference in the coefficients that is due to the variance difference and not to a real difference in the effects. Right?
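To spell it out in symbols (just generic notation, not my actual model): with a latent-variable model y* = x'b_g + e_g, where e_g ~ N(0, s_g^2) in group g, a probit fit within each group identifies only the ratio b_g/s_g. So even if b_men = b_women, the estimated coefficients will differ whenever s_men and s_women differ, and that gap reflects the variances rather than the effects.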

Now, suppose I have to compare two groups using probit regressions:
• Should I just use probit regressions and state the limitations?
• Or should I opt for a linear probability model?
• Or is it possible to compare the coefficients of the same variable between the two groups, while estimating the difference between the coefficients as explained by Clyde Schechter in this post: https://www.statalist.org/forums/for...bit-regression?

    Thank you so much!
    Please let me know if you need more clarifications.

  • #2
    Marry:
    you may want to take a look at https://journals.sagepub.com/doi/10....24199028002003
    Kind regards,
    Carlo
    (Stata 18.0 SE)



    • #3
      Dear Carlo Lazzaro,
      I just read that paper.
It really helped me better understand the problem that may arise.
But I wanted to ask two more questions:
1. The paper is from 1999, so are there any more recent methods in Stata that can help correct this problem (along the lines suggested by Allison, 1999)?
2. If I understood the paper correctly, it means that we can still keep the conventional probit regressions for each group and interpret the results as the overall differences between the groups (due to gender itself, for example, and due to residual heterogeneity), or is this nonsense?
      Thank you so much!



      • #4
To Carlo's recommendation I would like to add this article: https://journals.sagepub.com/doi/10....49124117747306, whose authors cite this working paper by Maarten Buis: http://www.maartenbuis.nl/wp/odds_ratio_3.1.pdf
        http://publicationslist.org/eric.melse



        • #5
          Marry:
1) Not that I know of, but I should admit that most of my research is based on linear regressions;
2) the issue is exactly the one reported in Paul Allison's article: the interpretation is critical, as residual variation may play a confounding role.
          Kind regards,
          Carlo
          (Stata 18.0 SE)



          • #6
Thank you so much, Carlo Lazzaro and ericmelse, for your very helpful comments.



            • #7
              I think of this differently. It's always difficult to compare probit coefficients across different models or subpopulations because the coefficients do not give the magnitudes of the effects on the response probabilities. One should compare average partial (marginal) effects instead. The APEs already account for the different scales. One can do this using separate probits or, with clever use of margins, a single probit with interactions. Unfortunately, I don't know how to tell Stata to compute a standard error for a difference in two APEs or a set of APEs. (suest does not seem to work because the delta method is used. It should work, so maybe in Stata 18? I'd be very interested to know how to do this other than bootstrapping.)

              Code:
              probit y x1 x2 ... i.xk if g == 0
              margins, dydx(*)
              probit y x1 x2 ... i.xk if g == 1
margins, dydx(*)
              or

              Code:
probit y c.x1 c.x2 ... i.xk c.g c.g#c.x1 c.g#c.x2 ... c.g#i.xk
margins, dydx(*) at(g = 0) subpop(if ~g)
margins, dydx(*) at(g = 1) subpop(if g)
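For the standard error of the gap, one way is to bootstrap the difference in the two APEs. A rough sketch, not tested, with made-up variable names (y, x1, x2, g) and a single continuous covariate of interest:

Code:
* rough sketch: bootstrap the gap in the APE of x1 between the two groups
* strata(g) keeps the group sizes fixed across bootstrap replications
capture program drop apegap
program define apegap, rclass
    probit y x1 x2 if g == 0
    margins, dydx(x1) post
    scalar ape0 = _b[x1]
    probit y x1 x2 if g == 1
    margins, dydx(x1) post
    return scalar diff = _b[x1] - ape0
end
bootstrap diff = r(diff), reps(500) seed(123) strata(g): apegap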



              • #8
                Jeff Wooldridge Thank you so much for your important remark and suggestion.
So, if I understand your answer correctly, I should just report the average partial effects for each group in the paper (but not the probit coefficients, since they are of no use for us anyway).

                Unfortunately, I don't know how to tell Stata to compute a standard error for a difference in two APEs or a set of APEs.
Using the APEs, we can see whether the effects are different between the two groups. However, you are saying we cannot tell how important the difference between the APEs of the two groups is, right?



                • #9
                  Yes, that's correct. You can easily see whether the effects are practically different by comparing their APEs. Putting a confidence interval around that difference is more difficult if I understand Stata's limitations. As I said, bootstrapping is always a possibility. I would be careful in using the term "important." A difference may or may not be practically large and may or may not be statistically significant.



                  • #10
                    Jeff Wooldridge Carlo Lazzaro In addition to the above, I would like to mention this recent paper published in Sociological Methodology by Trenton D. Mize, Long Doan and J. Scott Long: A general framework for comparing predictions and marginal effects across models.
They discuss a methodology that (also) allows for a group comparison using one model (as well as two models) in which 'standard errors are corrected for clustering' (see Table 7, p. 183), using Stata's gsem and mlincom.
                    Examples with dta files are provided here.
                    Their method allows for logit and probit models through the gsem link option.
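To give a flavor of what that looks like in practice, a bare-bones sketch (the variable names y, x1, g and the cluster variable clustid are made up, and this is not the code from the paper or its example files):

Code:
* one interacted probit via gsem's probit link, with clustered standard errors
gsem (y <- c.x1 i.g i.g#c.x1, probit), vce(cluster clustid)
* group-specific average marginal effects of x1 on the probability scale
margins, dydx(x1) over(g)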
                    http://publicationslist.org/eric.melse



                    • #11
                      Thank you so much Jeff Wooldridge !

