
  • Sample size calculations for a non-inferiority trial

    Hi Listers,

    I need to calculate the sample necessary for a non-inferiority trial where we are comparing effectiveness of using standard of care vs. a new approach to detect cancer. We know the SoC can detect 80% and we would like to show the new approach is non-inferior using a 10% margin. Participants will undergo both assessments.

    IV: binary (yes/no cancer)
    DV: binary: SoC vs. new care

    Most online information on non-inferiority trials assumes a comparison of two independent samples, but we only have one (paired) sample.

    I could not find a Stata command to estimate sample size for a one-sample (paired) non-inferiority trial. I was advised this could be done using simulations, but I am unsure how. I have looked into powersim, but I am unsure how to build the predictor into the cov1() option (and how to set it up in general).

    powersim, ///
        b(0.1) ///
        alpha(0.05) ///
        pos(1) ///
        sample(50(50)400) ///
        nreps(500) ///
        family(binomial) ///
        link(logit) ///
        cov1(x1 -0.25 normal 0 1) /// how is this defined?
        inside /// do I need this?
        dofile(psim_dofile, replace) : /// what should this be?
        glm y i.x1, family(binomial) link(logit)

    Any help would be much appreciated!


  • #2
    It can be done by comparing the lower confidence bound of the risk difference for paired observations against the non-inferiority margin. You can get that, at least asymptotically, from the official -mcc- command in a simulation context.

    But in order to simulate diagnostic findings for the two tests in a representative population, you might want to consider a couple of additional parameters, either fixed as assumptions or varied across simulations: (i) the prevalence of cancer in the population undergoing the diagnostic comparison (if screening, it will be lower than if the experimental test is used in a follow-up, confirmatory manner), and (ii) the specificity (or false-positive rate) of each test.
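
    A minimal sketch of what I have in mind, under illustrative assumptions (SoC sensitivity 0.80, new-test sensitivity 0.75, margin 0.10, and, for simplicity only, independent results within a patient — in practice the two tests will be correlated, which this sketch does not model). Rather than relying on -mcc-'s stored results, it computes the asymptotic (Wald) lower bound of the paired difference directly from the discordant counts; the program and option names (nisim, psoc(), pnew()) are made up for the example:

    clear all
    set seed 20240101

    capture program drop nisim
    program define nisim, rclass
        syntax , n(integer) psoc(real) pnew(real) margin(real)
        drop _all
        set obs `n'
        generate byte soc = runiform() < `psoc'   // detected by SoC
        generate byte new = runiform() < `pnew'   // detected by new test
        quietly count if soc == 1 & new == 0
        local b = r(N)                            // SoC-only detections
        quietly count if soc == 0 & new == 1
        local c = r(N)                            // new-test-only detections
        local d  = (`c' - `b') / `n'              // difference, new - SoC
        local se = sqrt(`b' + `c' - (`b' - `c')^2 / `n') / `n'
        return scalar lb = `d' - invnormal(.975) * `se'
        return scalar ni = (`d' - invnormal(.975) * `se') > -`margin'
    end

    * 500 replications at n = 200; the mean of ni estimates power
    simulate lb = r(lb) ni = r(ni), reps(500) nodots: ///
        nisim, n(200) psoc(0.80) pnew(0.75) margin(0.10)
    summarize ni

    Looping the simulate call over several n values (and over assumed sensitivities, prevalence, and specificities, per the points above) would map out the sample size needed for the target power.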



    • #3
      Dear Joseph,

      Thank you for your reply. I was hoping you may be able to provide some more information on how to set this up.

      Assuming SoC detects 80% of cancers and that we set a non-inferiority margin of 10%: based on 100 patients, 80 are detected and 20 are missed by SoC, while 70 are detected and 30 are missed by the new test: mcci 80 20 70 30, level(95)


                       | Controls               |
      Cases            |   Exposed    Unexposed |      Total
      -----------------+------------------------+------------
               Exposed |        80           20 |        100
             Unexposed |        70           30 |        100
      -----------------+------------------------+------------
                 Total |       150           50 |        200

      McNemar's chi2(1) =     27.78        Prob > chi2 = 0.0000
      Exact McNemar significance probability            = 0.0000

      Proportion with factor
         Cases          .5
         Controls       .75         [95% Conf. Interval]
                                    --------------------
         difference    -.25         -.3412718   -.1587282
         ratio          .6666667     .572763     .7759657
         rel. diff.    -1           -1.525914   -.4740865

         odds ratio     .2857143     .1646011    .4752022   (exact)


      1. I am unsure (based on the output) whether this is the correct way to set up the mcci command.

      2. This is only one scenario, but the values could vary, and I could set up a loop to vary them in a more systematic way.

      3. Is the idea that I could run this on 100 (?) variations and see how many times the lower bound of the estimated CI actually includes the non-inferiority margin (0.20) in order to calculate power? I am interested in the difference and its 95% CI, correct?

      Apologies if these are very basic questions, but this is the first time I have attempted simulations for sample size.
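
      To make question 2 concrete, here is how I imagine varying the numbers before adding sampling variability: for assumed discordant fractions, tabulate how the lower Wald bound of the paired difference moves with n. The 10% and 5% discordant fractions below are placeholders I made up for illustration, not study values:

      * SoC-only = 10%, new-test-only = 5% of patients (illustrative assumptions)
      foreach n of numlist 100 200 400 800 {
          local b  = 0.10 * `n'                  // SoC detects, new test misses
          local c  = 0.05 * `n'                  // new test detects, SoC misses
          local d  = (`c' - `b') / `n'           // difference, new - SoC
          local se = sqrt(`b' + `c' - (`b' - `c')^2 / `n') / `n'
          display "n = `n'   lower 95% bound = " %7.4f (`d' - invnormal(.975) * `se')
      }

      Power itself would then come from repeating this with simulated (random) counts rather than fixed ones. Is that the right direction?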




