
  • Zero-event meta-analysis with ipdmetan

    Hey everyone, I'm trying to run a two-stage individual participant data (IPD) meta-analysis with ipdmetan, to generate a forest plot of the odds of an event across several trials. Unfortunately, for the majority of trials, there were no events in one of the subgroups of interest. Is there any way ipdmetan can adjust for zero-event trials so that they are included in the analysis? The only relevant option I can see is "keepall", which includes the excluded studies in the forest plot. Reading the literature on zero-event meta-analysis, some authors propose adding 0.5 events in the subgroup of interest (a continuity correction). Is there any way ipdmetan can handle this? Thank you!

  • #2
    Any help would be appreciated!



    • #3
      I don't see anything in the help file, which can be viewed by executing this command:

      Code:
      view "http://fmwww.bc.edu/repec/bocode/i/ipdmetan.sthlp"
      Have you tried contacting David Fisher, the author of the package?
      --
      Bruce Weaver
      Email: [email protected]
      Version: Stata/MP 19.5 (Windows)



      • #4
        I haven't used ipdmetan enough to make a concrete suggestion. It does seem to be the most flexible suite out there, and the only one I know of that handles IPD. I should think that there are binomial-based methods within the package. I believe -metan- is integrated into -ipdmetan-, and I know -metan- can handle zero-event studies in general, so you'll just need to poke around the help file and documentation some more.

        The literature on zero-event studies in meta-analysis is fairly mature. It's usually not an important issue when deciding between normal- and binomial-based methods, or between aggregate and IPD methods, as long as the number of such studies and the subjects within them are a small fraction of the overall set. However, in your case you seem to have a lot of these studies, and that becomes problematic because the results can be highly sensitive to the model and to any ad hoc adjustments (such as adding 0.5 events -- the use of which is controversial). You may not find any satisfactory aggregate-data model, though, if your estimates lie near the "edge" of the parameter space (especially for tau2), leading to model non-convergence. You may be forced to simplify to a fixed-effects-only type of model.
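        To see why that 0.5 correction is both tempting and arbitrary, take a hypothetical trial (numbers made up) with 0/50 events under treatment and 4/50 under control: the raw odds ratio is 0 and its log is undefined, while adding 0.5 to every cell manufactures a finite estimate whose value depends on the chosen constant.

        Code:
        * Hypothetical 2x2 cells: a/b = events/non-events under treatment,
        * c/d = events/non-events under control (a=0, b=50, c=4, d=46)
        display "Uncorrected OR: " (0 * 46) / (50 * 4)          // 0 -> log(OR) undefined
        display "Corrected OR:   " (0.5 * 46.5) / (50.5 * 4.5)  // ~0.102, now estimable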

        On the simpler side, you might consider not estimating any contrast within this subgroup at all, given the lack of events, if it appears there are no, or few, events in each group. If you proceed, you should conduct and present some kind of sensitivity analysis of the modeling approach, showing how it affects your estimates, confidence intervals and conclusions.

        On the more advanced side, you may want to do the modeling directly, but you'll be working with (non-)linear mixed models. You may also wish to consider Bayesian methods for your IPD meta-analysis, with reasonable priors, to explore what the resulting uncertainty in your effect estimates will be.
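        For the Bayesian route, a minimal sketch (assuming hypothetical participant-level variables event and treat, and a trial identifier trialid) might look like the following; the weakly informative prior on the treatment log-odds ratio keeps the posterior well behaved despite the sparse events:

        Code:
        * bayes: prefix on a one-stage mixed logistic model; the prior applies
        * to the treatment coefficient, normal(0, 4) = mean 0, variance 4
        bayes, prior({event:1.treat}, normal(0, 4)): ///
            melogit event i.treat || trialid: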

        In short, there are no good options in this territory.



        • #5
          Leonardo Guizzetti summarizes the situation well, I think.

          Although I am the author of the ipdmetan package, my own opinion is that a two-stage approach (which is what ipdmetan implements) is probably not the best option here. Firstly, I think it's important to ask the question: why do you wish to analyse IPD (whether two-stage or one-stage) rather than aggregate data? Among the better reasons for doing so are:

          (a) to obtain additional data not provided in the original study reports;
          (b) to adjust for covariates;
          (c) to investigate individual-level subgroups (and/or treatment-covariate interactions);
          (d) to "fix up" the modelling somehow, e.g. imputing missing data, accounting for non-proportional hazards in survival data ... or, potentially -- as here -- overcoming issues arising from sparse data.

          However, I find it hard to understand why a researcher would go to the trouble of collecting IPD specifically to overcome sparse data without also considering what their solution would be (i.e. how to make use of the collected IPD). So, let's assume that Pete Pan has access to this IPD, but does not have in mind any of the specific reasons given above. (That is, they are not obliged a priori to analyse their IPD in any particular way.)

          My preference would be for one of two approaches:

          1. An aggregate-data approach suitable for sparse data, such as Mantel-Haenszel methods or Peto's odds ratio. That is, you can collapse (literally, using the Stata command of that name) the IPD into aggregate data in the form of counts a, b, c, d, representing events and non-events under treatment and events and non-events under control. Then use metan rather than ipdmetan. The metan help file clearly gives options for such count data, under help metan_binary. (See the first sketch after this list.)

          2. A (fixed-effects) one-stage model, using logit (or possibly a penalized alternative, e.g. firthlogit or penlogit). This might give a better fit to the sparse data, because the likelihood is based on all study data simultaneously rather than on each study separately. The main negative of this approach is that you lose the comforting meta-analysis outputs such as heterogeneity tests and forest plots. But on the positive side, you obtain what you (presumably) primarily want: a valid pooled effect size and confidence interval. (See the second sketch after this list.)
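          As a rough sketch of option 1, with hypothetical variable names (one row per participant; trialid, with treat and event both coded 0/1):

          Code:
          * Collapse the IPD to one row of counts per trial and arm
          preserve
          collapse (sum) events = event (count) total = event, by(trialid treat)
          generate nonevents = total - events
          reshape wide events nonevents total, i(trialid) j(treat)
          * metan's four-variable syntax: events/non-events under treatment,
          * then under control; with -or-, pooling defaults to Mantel-Haenszel
          metan events1 nonevents1 events0 nonevents0, or
          * Peto's method instead:
          * metan events1 nonevents1 events0 nonevents0, or peto
          restore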
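          And a rough sketch of option 2, with the same hypothetical names; study-specific intercepts enter as indicator variables:

          Code:
          * One-stage fixed-effects logistic model; -or- reports odds ratios
          logit event i.treat i.trialid, or
          * Firth-penalized alternative for very sparse data (user-written;
          * install with: ssc install firthlogit). If your version rejects
          * factor-variable notation, create the indicators manually first.
          firthlogit event i.treat i.trialid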

          I hope that between us, Leonardo and I have been helpful!

          Best wishes,
          David.



          • #6
            Originally posted by David Fisher View Post
            Extremely helpful, thank you!! I'm quite a novice with IPD, so I may have to consult a statistician regarding your preferred approaches, but hopefully it will come together. Thanks again to you both.

            Comment


            • #7
              David Fisher or anyone else,

              I went with the M-H work-around suggested here. This might be a silly question, but using this approach, is there any way to adjust for other covariates? For example, I have my events aggregated and collapsed to a, b, c, d per trial, and it would seem that adjusting for covariates is not possible with trial-level count data. Is this simply a limitation of the M-H model, or is there some way to go about it? Some covariates I'm looking to adjust for in my analysis would be age and sex.

              I also ran a simple fixed-effects approach, which should allow me to adjust for covariates using the IPD. Regarding the fixed effects: would it be appropriate to use a mixed-effects model here instead, letting the intercepts vary at the study level to account for between-study heterogeneity? The results are very similar, but I just want to pick the most sound approach.



              • #8
                Pete Pan? Is this a real name?

                Anyway...

                Have you considered a one-stage mixed-effects model via melogit? It handles zero events well. If you have age and sex at the participant level, melogit would be my first choice. However, as David pointed out, you will lose the typical meta-analysis outputs.
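                A minimal sketch, again under hypothetical variable names (participant-level event, treat, age, sex; trialid identifying trials):

                Code:
                * Random intercepts by trial absorb between-study heterogeneity;
                * -or- reports the pooled odds ratio for treatment
                melogit event i.treat c.age i.sex || trialid:, or
                * Optionally let the treatment effect vary across trials too:
                * melogit event i.treat c.age i.sex || trialid: treat, or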


