  • Quasi-complete separation with conditional logit

    Hi Statalisters, I have the following problem that I cannot handle by myself. I am estimating a regression of a zero/one variable (y = 1 if success) on a zero/one variable (x = 1 if a given event occurs) with panel (years x countries) data. My coefficient of interest is beta in y = c_i + t_t + beta*x. The structure of my data is such that y = 0 always when x = 1 (x = 1 perfectly predicts the outcome y = 0); by contrast, both y = 0 and y = 1 occur when x = 0. Hence I have a quasi-complete separation problem.

    I first estimate a linear probability model (LPM) with time and country fixed effects, and everything is fine because OLS is not affected by separation. Then I try discrete choice models and run into the quasi-complete separation problem, because they are estimated by maximum likelihood. When I run a standard logit model (no country or time fixed effects), Stata does not estimate the coefficient on x, which perfectly predicts the outcome, because the maximum likelihood estimate of beta does not exist in this case. Next I try a conditional logit, conditioning out the country fixed effects. Stata correctly drops the countries that never experience success (y = 0 always), but apart from that it works fine and gives me reasonable estimation results. (The commands are roughly those sketched below.)

    My question is: why doesn't the conditional logit model incur the quasi-complete separation problem? Since (as far as I understand) it also uses maximum likelihood, shouldn't its likelihood also fail to attain a maximum when the independent variable x perfectly predicts the outcome y?

    Many thanks for your help! Mary
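
    For concreteness, the steps described above might look roughly like this in Stata; the variable names success, event, country, and year are hypothetical placeholders, not taken from the thread:

        * linear probability model with country and time fixed effects
        regress success event i.country i.year

        * pooled logit: under quasi-complete separation Stata omits event
        * (and the observations it perfectly predicts) rather than report a diverging estimate
        logit success event

        * conditional (fixed-effects) logit, conditioning out the country effects
        clogit success event, group(country)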

  • #2
    Dear Mary,

    As the name suggests, the conditional logit (aka fixed effects logit) is estimated by conditional maximum likelihood, not by maximum likelihood. So, the problems being considered by the two estimators are rather different. More specifically, in the standard logit case x is being used to predict whether y is 0 or 1, while in the conditional logit x is being used to predict which particular sequence of zeros and ones occurred, given the sum of ys. Therefore, x may be a perfect predictor in the regular logit case, but not when you condition on the sum of the ys. Keep in mind that the conditional logit has important limitations; for example, it does not allow you to compute partial effects of x on the probability that y is equal to 1.
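
    To make the conditioning concrete, here is a stylized two-period illustration (not from the thread): for a country i observed in years 1 and 2 with exactly one success, the conditional logit likelihood contribution is

        Pr(y_i1 = 1, y_i2 = 0 | y_i1 + y_i2 = 1) = exp(x_i1*beta) / [exp(x_i1*beta) + exp(x_i2*beta)]

    The country fixed effect c_i cancels from the numerator and denominator, so the estimator only compares periods within countries whose outcome varies; whether separation occurs is then a question about these within-country comparisons given the number of successes, which need not coincide with x perfectly predicting y in the pooled sample.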

    • #3
      Many thanks, Joao. This really helps. Best regards, Mary.
