  • one-sample non-inferiority sample size bootstrap

    I am trying to assess sample size via bootstrap analysis. The goal is to compare the outcome of an intervention in a group of patients to a known (good) outcome of x%. I want to test whether the new intervention performs better than or equal to the old intervention. So basically I have a one-sample comparison to a known proportion, a non-inferiority analysis. Let's say the non-inferiority margin is 3%, and given my resources I can only apply the new intervention to 25-50 patients.

    I am assuming the best way to do this is to get the results of the intervention on the maximum number of patients possible (between 25 and 50), get an estimate of the outcome via bootstrap resampling, and see whether the lower limit of the bootstrap-based 95% CI is higher than (x - 3)%.
    a) Would this be a correct approach?
    b) Is there an optimal sample size for this bootstrap analysis, and how do I estimate it in Stata?
    c) If I were not limited by resources, how would I estimate the sample size for a one-sample non-inferiority comparison of a mean/proportion to a known or standard mean/proportion? Is there Stata code for that?

    Thanks
    Ashar

  • #2
    First, I think that you mean simulation (see below) and not bootstrap. The confidence interval for a single proportion for a binomially distributed random variable can be readily estimated via each of several analytical formulas (see the help file for ci proportions). Take your pick; you don't need to resort to the bootstrap.

    Second, to estimate the required sample size for a given power or vice versa, you'll need not only the "historical control" estimate of the relative frequency of a good patient outcome with standard-of-care intervention, but also an idea of what the anticipated rate of therapeutic success is with your new intervention. You don't mention that anywhere. It's kind of important.

    Last, I don't follow you when you say, "get the results of the intervention on maximum possible between 25-50 and then get an estimate of the outcome". Maybe, instead of that, try something like the following:

    1. for a particular sample size, simulate a number (hundreds to thousands) of clinical studies of the new intervention under your hypothesized rate of therapeutic success for it,

    2. for each study, compute the lower confidence bound of the two-sided 95% confidence interval for the therapeutic success rate,

    3. compare the lower bound computed in each study to the "historical control" rate minus the margin of noninferiority (3% in your case),

    4. compute the proportion of studies in which the lower bound exceeds that threshold--that's the power for the given sample size.

    For a power analysis, you'd repeat that for a range of particular sample sizes (from 25 to 50 in your case) and either list them and their corresponding power estimates, or plot the power estimates against the sample sizes.

    I show code for a simple simulation study for your margin of noninferiority and range of sample sizes. It uses Stata's simulate command. Because you don't give your "historical control" estimated rate of good patient outcome under standard-of-care intervention, and don't mention the anticipated rate of the new intervention's therapeutic success, I've plugged in a couple of arbitrary values (70% and 90%) for illustration.
    Code:
    clear *
    
    set seed `=strreverse("1487270")'
    
    program define simem, rclass
        version 15.1
        syntax , [n(integer 50) pi0(real 0.70) Delta(real 0.03)] pi1(real)
    
        tempname good_outcomes
        scalar define `good_outcomes' = rbinomial(`n', `pi1')
    
        cii proportions `n' `=`good_outcomes'', exact
        return scalar success = (`pi0' - `delta') < r(lb)
    end
    
    // Power analysis (list)
    tempname PowerAnalysis
    forvalues n = 25(5)50 {
        quietly simulate success = r(success), reps(3000) nodots: simem , n(`n') pi1(0.90)
        summarize success, meanonly
        display in smcl as text "N = `n'", "Power (lower bound of two-sided CI) = " %04.2f r(mean)
        matrix define `PowerAnalysis' = nullmat(`PowerAnalysis') \ (`n', r(mean))
    }
    
    // Power Analysis (graphical)
    drop _all
    matrix colnames `PowerAnalysis' = N Power
    quietly svmat double `PowerAnalysis', names(col)
    graph twoway line Power N, lcolor(black) ///
        ylabel( , angle(horizontal) nogrid) ///
            yline(0.9, lcolor(black) lpattern(dash))
    
    exit
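    For anyone who wants to sanity-check the above outside Stata, here is a rough Python sketch of the same four steps. The exact (Clopper-Pearson) lower bound that cii ..., exact reports is recomputed here by bisection on the binomial upper tail; the 90% assumed success rate, 70% historical rate, and 3% margin are the same arbitrary illustrative values as in the Stata code, not recommendations.

```python
import math
import random

def exact_lower_bound(k, n, alpha=0.05):
    """Clopper-Pearson lower bound of the two-sided (1 - alpha) CI for a
    binomial proportion: the p solving P(X >= k | n, p) = alpha / 2."""
    if k == 0:
        return 0.0
    def upper_tail(p):
        return sum(math.comb(n, x) * p ** x * (1 - p) ** (n - x)
                   for x in range(k, n + 1))
    lo, hi = 0.0, 1.0
    for _ in range(60):                      # bisection; upper_tail rises in p
        mid = (lo + hi) / 2
        if upper_tail(mid) < alpha / 2:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def power(n, pi1=0.90, pi0=0.70, delta=0.03, reps=1000, seed=1487270):
    """Share of simulated studies whose CI lower bound clears pi0 - delta."""
    rng = random.Random(seed)
    threshold = pi0 - delta
    hits = 0
    for _ in range(reps):
        k = sum(rng.random() < pi1 for _ in range(n))    # one simulated study
        hits += exact_lower_bound(k, n) > threshold
    return hits / reps

for n in range(25, 51, 5):
    print(f"N = {n}  Power = {power(n):.2f}")
```

    The numbers will differ a little from the Stata run because the random draws differ, but the shape of the power-versus-N curve should agree.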



    • #3
      Thanks Joseph.
      My apologies; I did not word my question properly.


      Originally I wrote, "I am trying to assess sample size via bootstrap analysis."
      What I meant was that I am trying to assess the sample size needed for a bootstrap analysis, not to assess power by simulation over a given range of sample sizes.

      I am limited to recruiting 25 to 50 patients; the fewer the better. The required sample size is on the order of 150+, and I can't afford it. So I am planning to get the outcomes for (let's say) 30 patients. After I get the outcome for 30 patients, which is inadequate, I plan to conduct a bootstrap analysis (1,000 resamples) to get a distribution of estimates of the outcome. I plan to use this distribution and its confidence interval to evaluate non-inferiority against historical controls.
      Is this approach correct?
      Is 30 an adequate sample size for a bootstrap analysis to give valid estimates? For example, I know I can't use just 4 patients and run 1,000 samples with replacement to get valid estimates. The bootstrap treats the sample as the population from which it draws its resamples. So is there a method to estimate an adequate sample size for the bootstrap to be valid?
      For my study, if I can use 25 instead of 30 it would be better, but does that make the analysis less valid?

      Historical 65%, expected 70%, delta for non-inferiority 3%.
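      A quick empirical way to probe whether a given n is enough for the percentile bootstrap is a coverage simulation: draw many samples of size n from a known rate, compute the percentile bootstrap CI for each, and count how often the CI actually contains the truth. A rough Python sketch, using the 70% expected rate above (the number of simulated studies and resamples are illustrative choices):

```python
import random

def percentile_ci(sample, b=400, alpha=0.05, rng=None):
    """Percentile bootstrap (1 - alpha) CI for the mean of a 0/1 sample."""
    rng = rng or random.Random()
    n = len(sample)
    means = sorted(sum(rng.choice(sample) for _ in range(n)) / n
                   for _ in range(b))
    return means[int(b * alpha / 2)], means[int(b * (1 - alpha / 2)) - 1]

def coverage(n, p=0.70, studies=200, seed=1487270):
    """Share of simulated studies whose bootstrap CI contains the true p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(studies):
        sample = [1 if rng.random() < p else 0 for _ in range(n)]
        lo, hi = percentile_ci(sample, rng=rng)
        hits += lo <= p <= hi
    return hits / studies

for n in (4, 30):
    print(f"n = {n:2d}  empirical coverage of the nominal 95% CI = {coverage(n):.2f}")
```

      With n = 4 the interval is degenerate whenever all four outcomes agree, so the empirical coverage should fall well short of the nominal 95%; at n = 30 it should come much closer, though still not exactly 95%.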





      • #4
        Originally posted by ashar ata View Post
        The required sample size is of the order of 150+ and I cant afford it.
        Maybe you can conduct a study in the thirty patients, summarize the results (in a straightforward and conventional manner), and discuss.

        If the results of the feasibility study are promising, and if the benefit to public health of an additional intervention (one that would be worse than the current one by no more than 3%) warrants it, then you can argue for funding adequate for the required sample size in a follow-on "pivotal" study.
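        For reference, the order of magnitude of a required sample size like that can be cross-checked with the usual normal-approximation formula for a one-sample noninferiority test of a proportion, n = p1(1 - p1)(z_{1-alpha} + z_{1-beta})^2 / (p1 - (p0 - delta))^2. A Python sketch, plugging in the rates quoted in #3 (historical 65%, anticipated 70%, 3% margin) purely for illustration:

```python
import math
from statistics import NormalDist

def noninferiority_n(p0, p1, delta, alpha=0.025, power=0.80):
    """Approximate n for a one-sided alpha-level test of H0: p <= p0 - delta
    against H1: p > p0 - delta, with the given power when the true rate is p1.
    Variance is evaluated at p1 (conventions differ on this choice)."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    effect = p1 - (p0 - delta)     # distance from the noninferiority bound
    return math.ceil(p1 * (1 - p1) * (z_a + z_b) ** 2 / effect ** 2)

print(noninferiority_n(p0=0.65, p1=0.70, delta=0.03))    # → 258
```

        Formulas that evaluate the variance at the noninferiority bound (p0 - delta) rather than at p1 give a somewhat larger n, which is one reason published estimates vary.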
