  • Correctly Classified output

    Hey community!

    I hope you are well! I have an inquiry regarding the output from my logit model. As I understand it, the model correctly classifies the true negatives 99.3% of the time (adoption), while the sensitivity is only 2.73% of true positives correctly classified (non-adoption), and the overall rate of correct classification is 72.47% (see the quick arithmetic check after the output below). Given the large difference between sensitivity and specificity, can I draw conclusions from this outcome, or should I run another test / treatment?

    Thanks!

    Logistic model for x_notill

                  -------- True --------
    Classified |         D            ~D  |      Total
    -----------+--------------------------+-----------
         +     |         3             2  |          5
         -     |       107           284  |        391
    -----------+--------------------------+-----------
       Total   |       110           286  |        396

    Classified + if predicted Pr(D) >= .5
    True D defined as x_notill != 0
    --------------------------------------------------
    Sensitivity                     Pr( +| D)    2.73%
    Specificity                     Pr( -|~D)   99.30%
    Positive predictive value       Pr( D| +)   60.00%
    Negative predictive value       Pr(~D| -)   72.63%
    --------------------------------------------------
    False + rate for true ~D        Pr( +|~D)    0.70%
    False - rate for true D         Pr( -| D)   97.27%
    False + rate for classified +   Pr(~D| +)   40.00%
    False - rate for classified -   Pr( D| -)   27.37%
    --------------------------------------------------
    Correctly classified                        72.47%
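
    For reference, these summary figures follow directly from the counts in the table; the -display- lines below are just a quick arithmetic check against the table, not part of the original output:

        display "Sensitivity          = " %5.2f 100*3/110        // 3 of the 110 true D classified +
        display "Specificity          = " %5.2f 100*284/286      // 284 of the 286 true ~D classified -
        display "Correctly classified = " %5.2f 100*(3+284)/396  // 287 of the 396 observations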

  • #2
    You have, I infer, run -estat classification- without specifying a cutoff, so the default cutoff of 0.5 is used. What you see are the classification results when an observation is predicted positive whenever the predicted probability is 0.5 or greater. But 0.5 may not be a good cutoff for your test. You should run -lroc- to get the receiver operating characteristic curve; you may find a point on that curve that gives you much better sensitivity with an acceptable decrease in specificity. I would not draw any conclusions about this particular test based only on its performance at an arbitrary cutoff of 0.5; I would look for better cutoffs.
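
    A minimal sketch of that workflow in Stata, assuming the model has already been fit with -logit- (-lsens- and the 0.3 cutoff are illustrative additions, not values taken from your output):

        lroc                                 // ROC curve and area under the curve
        lsens                                // graph sensitivity and specificity against each possible cutoff
        estat classification, cutoff(0.3)    // reclassify at an alternative cutoff of your choosing

    Whichever cutoff you settle on should reflect the relative costs of false positives and false negatives in your application, not just the overall classification rate.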
