
  • bootstrap for internal validation: how can we calculate the testing performance (the performance of the bootstrap model in the original sample)?

    Dear Statalist users,

    I used the following code for internal validation of our model, and obtained a new file containing area (the bootstrap AUC), diff (the bootstrap AUC minus the base AUC; is there only one base AUC?), and optimism (the final AUC, I suppose?) for 200 bootstrap samples. The code below follows a suggestion from this forum:

    Code:
    capture program drop optimism
    program define optimism, rclass
    preserve
    bsample
    logit AO agec i.sex i.jobm i.incomef i.snec bmi i.lungsymp i.mrcyn i.diaasthma
    lroc, nograph
    return scalar area_bootstrap = r(area)
    end

    logit AO agec i.sex i.jobm i.incomef i.snec bmi i.lungsymp i.mrcyn i.diaasthma
    lroc, nograph
    local base_ROC = r(area)
    tempfile sim_results
    simulate area = r(area_bootstrap), reps(200) seed(12345) saving(`sim_results'): optimism
    use `sim_results', clear
    sum area
    gen diff = area - 0.7410
    gen optimism = 0.7410 - diff
    sum area
    sum diff
    sum optimism
    _pctile optimism, p(2.5 50 97.5)
    return list

    According to the TRIPOD explanation and elaboration, bootstrap validation should include 6 steps:
    1. Develop the prediction model in the original data and determine the apparent AUC.
    2. Generate a bootstrap sample.
    3. Develop a model using the bootstrap sample (applying all the same modeling and predictor selection methods), determining the apparent performance of the model on the bootstrap sample and the test performance of the bootstrap model in the original sample. (My question is: where is the code for the testing performance of the bootstrap model in the original sample?)
    4. Calculate the optimism as the difference between the bootstrap performance and the test performance. (Is the single base AUC the test performance?)
    5. Repeat steps 2 through 4 200 times.
    6. Average the estimates of optimism in step 5, and subtract the value from the apparent performance obtained in step 1 to obtain the optimism-corrected estimate of performance.
    The main question is: where is the code for the testing performance (the performance of the bootstrap model in the original sample)? Should we use the apparent performance obtained in step 1 instead of the testing performance?
    Another question: what command converts the linear predictor to a predicted probability for a Cox regression model? I know the command for the logistic regression model is invlogit.
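
    To show what I mean in the logistic case (a minimal sketch, run after fitting the logit model above; xb and p are new variable names of my choosing):

    Code:
    predict double xb, xb        // linear predictor (the log odds)
    gen double p = invlogit(xb)  // predicted probability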
    Many thanks!

  • #2
    The initial model is estimated by these lines:

    Code:
    logit AO agec i.sex i.jobm i.incomef i.snec bmi i.lungsymp i.mrcyn i.diaasthma
    lroc, nograph
    And the performance in terms of the area under the ROC curve is calculated in the next line:

    Code:
    local base_ROC = r(area)
    Presumably the result of this area is 0.7410, since that's the value the code is using to compare the performance in the bootstrap samples to the original performance.
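
    If so, a small untested suggestion: use the stored local instead of hard-coding 0.7410, so the comparison always matches the estimated base AUC (the local survives the use command within the same do-file run):

    Code:
    gen diff = area - `base_ROC'
    gen optimism = `base_ROC' - diff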

    Jorge Eduardo Pérez Pérez
    www.jorgeperezperez.com



    • #3
      Prof Pérez, many thanks for your valuable feedback! Sorry to bother you, but I want to ask more about it. Do you think the result of my code is the final AUC of the internal validation suggested by TRIPOD?



      • #4
        These lines are the reporting part of the code.

        Code:
        sum area
        sum diff
        sum optimism
        _pctile optimism, p(2.5 50 97.5)
        return list
        The first calculates the average AUC in the bootstrap samples.
        The second calculates the average difference between the AUC in each bootstrap sample and the original AUC.
        The third calculates the average "optimism", the original AUC minus the difference.
        The fourth and fifth give the distribution of "optimism" in the bootstrap samples.
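
        If you want to reuse any of these averages in later computations (a minimal illustration), summarize leaves the mean behind in r(mean):

        Code:
        sum diff, meanonly
        display "average difference: " r(mean)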
        Jorge Eduardo Pérez Pérez
        www.jorgeperezperez.com



        • #5
          Sorry to be so late responding to this question. However, I don't believe the code so far applies the bootstrap-generated models to the original sample. If I am wrong, I apologize.

          A note: please, as FAQ 12 requests, show all code and generated results between [CODE] and [/CODE] delimiters.

          The TRIPOD document seems to be referring to what Steyerberg et al. 2001 (p 776) call the "regular bootstrap" in the second paragraph below:

          Bootstrap resampling started with fitting the logistic model in a bootstrap sample of n subjects, which was drawn with replacement from the original sample. Averages of performance measures were taken over 100 repetitions (EPV 5, 10, 20) or 50 repetitions (EPV 40 or 80). These numbers were chosen since it was found that higher numbers only marginally improved the estimates. When stepwise selection was applied, it was applied in every bootstrap sample.

          For the regular bootstrap procedure, the model as estimated in the bootstrap sample was evaluated in the bootstrap sample and in the original sample. The performance in the bootstrap sample represents estimation of the apparent performance, and the performance in the original sample represents test performance. The difference between these performances is an estimate of the optimism in the apparent performance. This difference is averaged to obtain a stable estimate of the optimism. The optimism is subtracted from the apparent performance to estimate the internally validated performance: estimated performance = apparent performance - average(bootstrap performance - test performance).
          I'm sorry that I don't have the time to write the code in detail, but here is an outline. You've already written some of the pieces; steps 3-5 are new.

          1. Generate a bootstrap sample with bsample
          2. Calculate the AUC for that sample
          3. Save the model coefficients from the bootstrap sample.
          Essentially
          Code:
           matrix b = e(b)
          4. Apply the same model coefficients to the original sample to get new estimates.
          Start with
          Code:
           matrix score y = b
          5. Calculate the AUC for the predictions obtained in Step 4.
          6. Save the measures

          Repeat the process for each bootstrap sample and average the results.

          The resulting code should be put into a program called by simulate. There is a related example, "Creating a bootstrap sample", in the manual entry for bstat.
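
          For concreteness, here is a minimal, untested sketch of that outline, using the variable list from #1. I assume matrix score accepts the factor-variable column names of e(b) (estimates store followed by predict, xb is an equivalent route), and I use roctab for the AUC of the bootstrap-model score in the original sample; the linear predictor suffices because the AUC depends only on its ranking. The program name optimism2 and the variable xb_boot are my own choices:

          Code:
          capture program drop optimism2
          program define optimism2, rclass
              preserve
              bsample                                  // Step 1: draw a bootstrap sample
              logit AO agec i.sex i.jobm i.incomef i.snec bmi i.lungsymp i.mrcyn i.diaasthma
              lroc, nograph                            // Step 2: apparent AUC in the bootstrap sample
              tempname a_boot a_test
              scalar `a_boot' = r(area)
              matrix b = e(b)                          // Step 3: save the bootstrap coefficients
              restore                                  // return to the original sample
              capture drop xb_boot
              matrix score double xb_boot = b          // Step 4: score the original sample
              roctab AO xb_boot                        // Step 5: test AUC in the original sample
              scalar `a_test' = r(area)
              drop xb_boot
              return scalar area_boot = `a_boot'       // Step 6: save the measures
              return scalar area_test = `a_test'
              return scalar opt = `a_boot' - `a_test'  // optimism for this replicate
          end

          * apparent performance in the original sample
          logit AO agec i.sex i.jobm i.incomef i.snec bmi i.lungsymp i.mrcyn i.diaasthma
          lroc, nograph
          local apparent = r(area)

          tempfile sim_results
          simulate area_boot=r(area_boot) area_test=r(area_test) opt=r(opt), ///
              reps(200) seed(12345) saving(`sim_results'): optimism2
          use `sim_results', clear
          sum opt, meanonly
          display "optimism-corrected AUC = " `apparent' - r(mean)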

          I hope that others will chime in, as due to the press of other work, I cannot give the question more attention.

          Reference
          Steyerberg, E. W., Harrell, F. E., Jr., Borsboom, G. J. J. M., Eijkemans, M. J. C., Vergouwe, Y., & Habbema, J. D. F. (2001). Internal validation of predictive models: Efficiency of some procedures for logistic regression analysis. Journal of Clinical Epidemiology, 54(8), 774-781.
          Last edited by Steve Samuels; 28 Dec 2018, 15:58.
          Steve Samuels
          Statistical Consulting
          [email protected]

          Stata 14.2



          • #6
            Prof Pérez, many thanks for your valuable feedback!



            • #7
              Prof Samuels,
              Many thanks for your kind guidance. I am studying it and trying to figure out the correct code accordingly.
              BW
              Jing Pan



              • #8
                Originally posted by Jing Pan:
                Prof Samuels,
                Many thanks for your kind guidance. I am studying it and trying to figure out the correct code accordingly.
                BW
                Jing Pan
                Dear Jing Pan,
                Did you manage to identify the correct method for doing this and, if so, would you be willing to share your code? I have a similar challenge, and a worked example may help me apply the method to my data.
                Many Thanks

