Handling Rare Events in Meta-Analysis

    Hi All,

    Which method is best suited for a meta-analysis of rare events?
    Consider a pool of 300 studies encompassing 50,000+ patients, where each study reports a count E of a rare event out of its cohort size N (i.e. a proportion E/N). In the majority (80-85%) of studies this is reported as 0/N.
    The overall incidence is roughly 30/50,000 (about 0.06%).

    In these instances, would it be better to use a generalised linear mixed model (GLMM) or the Freeman-Tukey double arcsine (FTDA) transformation to obtain an OR for the rare event? I know that GLMMs can have convergence issues when a large proportion of the observations are 0/N, but back-transforming from the FTDA scale can also introduce bias.
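
    To make the FTDA side concrete, here is a rough numpy sketch of what I have in mind (synthetic counts loosely matching my setup, not my actual data): the double arcsine transform with its 1/(4N+2) variance, DerSimonian-Laird pooling on the transformed scale, and the Miller (1978) back-transformation. As I understand it, the "typical" N needed for the back-transformation (harmonic mean here) is where much of the reported bias comes from. I have not sketched the GLMM arm (a random-intercept logistic model), since that needs a mixed-model fitter rather than plain numpy.

    ```python
    import numpy as np

    def ft_double_arcsine(x, n):
        """Freeman-Tukey double arcsine transform of x/n and its approximate variance 1/(4n+2)."""
        t = 0.5 * (np.arcsin(np.sqrt(x / (n + 1))) + np.arcsin(np.sqrt((x + 1) / (n + 1))))
        v = 1.0 / (4 * n + 2)
        return t, v

    def miller_back_transform(t, n):
        """Miller (1978) inversion of the double arcsine, evaluated at a 'typical' sample size n."""
        s = np.sin(2 * t)
        inner = 1 - (s + (s - 1 / s) / n) ** 2
        # clamp guards against small negative values from rounding at the extremes
        return 0.5 * (1 - np.sign(np.cos(2 * t)) * np.sqrt(np.clip(inner, 0, None)))

    def dl_pooled(t, v):
        """DerSimonian-Laird random-effects pooling on the transformed scale."""
        w = 1 / v
        t_fixed = np.sum(w * t) / np.sum(w)
        q = np.sum(w * (t - t_fixed) ** 2)
        k = len(t)
        tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
        w_star = 1 / (v + tau2)
        t_pooled = np.sum(w_star * t) / np.sum(w_star)
        se = np.sqrt(1 / np.sum(w_star))
        return t_pooled, se, tau2

    # Hypothetical data: ~300 studies with a true incidence around 0.06%,
    # so most studies contribute 0/N.
    rng = np.random.default_rng(1)
    n = rng.integers(50, 400, size=300)
    x = rng.binomial(n, 0.0006)

    t, v = ft_double_arcsine(x, n)
    t_pooled, se, tau2 = dl_pooled(t, v)

    n_typical = len(n) / np.sum(1 / n)   # harmonic mean of the study sizes
    p_pooled = miller_back_transform(t_pooled, n_typical)
    p_lo = miller_back_transform(t_pooled - 1.96 * se, n_typical)
    p_hi = miller_back_transform(t_pooled + 1.96 * se, n_typical)
    print(f"pooled incidence ~ {p_pooled:.5f} (95% CI {p_lo:.5f} to {p_hi:.5f}), tau^2 = {tau2:.2e}")
    ```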

    I have already ruled out exact methods (impractical at this sample size) and continuity corrections (because of the bias they introduce).
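
    To show what I mean about continuity corrections, a toy illustration with made-up study sizes: a fixed add-0.5 correction assigns every zero-event study an imputed proportion of 0.5/(N+1), so the value (and the weight it receives) is driven entirely by study size rather than by anything observed.

    ```python
    import numpy as np

    # Add-0.5 continuity correction applied to hypothetical zero-event studies:
    # the imputed proportion (x + 0.5) / (n + 1) with x = 0 depends only on n.
    for n in (100, 500, 2000):
        p_cc = 0.5 / (n + 1)
        log_odds = np.log(p_cc / (1 - p_cc))
        print(f"0/{n}: imputed proportion = {p_cc:.5f}, log-odds = {log_odds:.2f}")
    ```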

    Thank you for your help!