  • ROC curve

    Hi,
    Dear Stata experts and users,

    I am using ROC curves to compare the predictive power of 3 different logit models.
    Besides looking at the AUROC value and the shape of the curve, what else should I do to validate the predictive power?
    What is the syntax for the p-value?

    Thanks for your help.

  • #2
    "Prediction power" is a quite vague concept. If you hope for an answer, give some more information, for example: What is it that you want to predict? From what information do you want to predict it?
    I don't understand your question:
    What is the syntax for p=value



    • #3
      Apologies.
      I am testing 3 existing financial distress prediction models on my own sample, which consists of companies' financial information. The models are established models, so I am testing their applicability to my own sample.

      I want to identify which model provides the highest predictive power, in terms of the highest percentage of correct predictions of distress.
      I employed ROC analysis, and I know the AUROC can provide the answer. However, I would like to know whether there is any method to verify/validate the AUROC value.
      Also, could a p-value or t-statistic help identify the significance of the AUROC value? If yes, how do I obtain the significance value in Stata?



      • #4
        The AUROC does not quantify the percentage of correct predictions. In fact, the percentage of correct predictions for any given model will depend on the prior probability of your outcome (distress) and also where on the ROC curve you "operate" the test (i.e. what cutoff of the predictor you use). What the AUROC does quantify is the probability that as between a randomly selected entity with distress and a randomly selected entity without distress, the one with distress scores higher on the model's predictor. (This is sometimes called the two-point forced-choice probability.) It is a measure of model discrimination, not predictive accuracy.
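
        A minimal sketch of that cutoff dependence in Stata, assuming you fit one of the logit models yourself (the predictors x1 x2 and the 0.2 cutoff are placeholders, not your actual variables):

        logit distress x1 x2
        estat classification                 // percent correctly classified at the default 0.5 cutoff
        estat classification, cutoff(0.2)    // same model (and same AUROC), but a different percent correct at another cutoff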

        I have no idea what you mean by "verify/validate" the AUROC value.

        It is not common to test "the significance" of the AUROC value, and I'm not sure why you want to do this. Generally speaking AUROC values are judged by their magnitude (various people use different rules of thumb). If you really want to test against a null hypothesis (AUROC = 0.5, not 0, being the most sensible one), you can simply look at whether the confidence interval in the output includes 0.5.
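
        A minimal sketch of that check, assuming a 0/1 outcome variable distress and one model's score stored in model1 (both names are placeholders):

        roctab distress model1    // reports the AUROC with its 95% CI; if the interval excludes 0.5, the AUROC differs from chance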

        Sometimes it is of interest to test whether the AUROCs of different models differ significantly, and I suspect that may be what you are most interested in here. For that, have a look at the -roccomp- command.



        • #5
          Clyde Schechter: Thank you for your explanation. Yes, I should have said discrimination instead of predictive accuracy. I ran the -roccomp- command for all 3 models, and the AUROC values are the same as when I run each model separately (using the -roctab- command). Which value are you referring to? Sorry, I don't understand what you mean by "Sometimes it is of interest to test whether the AUROCs of different models differ significantly".

          I came across literature which mentioned that the AUROC can be biased and therefore suggested using the likelihood ratio (LR) instead. I wonder whether there are any other ways to address this issue.

          Many thanks.



          • #6
            You said you had 3 models. So let's say that the model estimates of the probability of distress are stored in variables model1 model2 and model3. And I assume that the actual distress outcome is coded as 0/1 in a variable called distress. Then the command -roccomp distress model1 model2 model3- will give you the AUROC for all three models and will test the hypothesis of equality among them. That may provide some of the information that you will use to decide which of the three models you prefer.
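
            A minimal sketch of that workflow, assuming you still need to generate the three predicted probabilities from your own logit fits (the predictor names x1-x6 are placeholders for whatever each model uses):

            logit distress x1 x2
            predict model1, pr       // predicted probability of distress from model 1
            logit distress x3 x4
            predict model2, pr
            logit distress x5 x6
            predict model3, pr

            roccomp distress model1 model2 model3    // AUROCs for all three models plus a chi-squared test of equality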
