
  • The theory behind meta-analysis (difference in mean)

    Hi,

    I just read some basic, introductory theory about meta-analysis. Stata can do meta-analysis with -metan-. A meta-analysis uses certain methods to combine the individual studies; the fixed-effect and random-effects models are different, of course. Each individual study contributes its observed mean, the standard deviation of its observations, and its sample size. From these three quantities, the meta-analysis can compute a summary effect size (the summary difference in means) and the standard error of that summary difference.
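    From what I have read so far, the fixed-effect summary seems to be an inverse-variance weighted average of the study estimates (this is my own reading, so please correct me if I have it wrong):

    $$\hat\theta = \frac{\sum_i w_i \hat\theta_i}{\sum_i w_i}, \qquad w_i = \frac{1}{v_i}, \qquad \operatorname{SE}(\hat\theta) = \frac{1}{\sqrt{\sum_i w_i}},$$

    where $\hat\theta_i$ is study $i$'s observed difference in means and $v_i$ its sampling variance. This seems to match the "I-V pooled" (inverse-variance) label in the -metan- output below.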

    However, this is the summary effect size, not the true effect size. Why can the summary effect size be used to estimate the true effect size? The book I read did not explain the theory behind this procedure.

    Tom

    PS: Using some online data, I ran a simple meta-analysis under both the fixed-effect and random-effects models.

    Code:
    . metan samplesizeoftreatedgroup meanintreatedgroup sdintreatedgroup samplesizeofcontrolgroup meanincontrolgroup sdincontrolgroup, fixed
    
               Study     |     SMD   [95% Conf. Interval]     % Weight
    ---------------------+---------------------------------------------------
    1                    |  0.095      -0.263     0.453         12.39
    2                    |  0.279      -0.067     0.624         13.30
    3                    |  0.370      -0.072     0.812          8.13
    4                    |  0.666       0.464     0.867         39.16
    5                    |  0.466       0.057     0.874          9.53
    6                    |  0.186      -0.115     0.487         17.49
    ---------------------+---------------------------------------------------
    I-V pooled SMD       |  0.417       0.291     0.543        100.00
    ---------------------+---------------------------------------------------
    
      Heterogeneity chi-squared =  11.93 (d.f. = 5) p = 0.036
      I-squared (variation in SMD attributable to heterogeneity) =  58.1%
    
      Test of SMD=0 : z=   6.48 p = 0.000
    
    . metan samplesizeoftreatedgroup meanintreatedgroup sdintreatedgroup samplesizeofcontrolgroup meanincontrolgroup sdincontrolgroup, random
    
               Study     |     SMD   [95% Conf. Interval]     % Weight
    ---------------------+---------------------------------------------------
    1                    |  0.095      -0.263     0.453         15.75
    2                    |  0.279      -0.067     0.624         16.28
    3                    |  0.370      -0.072     0.812         12.63
    4                    |  0.666       0.464     0.867         23.26
    5                    |  0.466       0.057     0.874         13.80
    6                    |  0.186      -0.115     0.487         18.27
    ---------------------+---------------------------------------------------
    D+L pooled SMD       |  0.360       0.153     0.567        100.00
    ---------------------+---------------------------------------------------
    
      Heterogeneity chi-squared =  11.93 (d.f. = 5) p = 0.036
      I-squared (variation in SMD attributable to heterogeneity) =  58.1%
      Estimate of between-study variance Tau-squared =  0.0373
    
      Test of SMD=0 : z=   3.41 p = 0.001
    
    .
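    As a rough check on the fixed-effect output above, the pooled SMD is just the weighted average of the six study SMDs using the printed % weights (hand arithmetic, so only approximate):

    Code:
    * weighted average of the study SMDs, weights taken from the % Weight column
    display (0.095*12.39 + 0.279*13.30 + 0.370*8.13 + 0.666*39.16 + 0.466*9.53 + 0.186*17.49)/100
    * gives about 0.417, matching the I-V pooled SMD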

  • #2
    You are asking about basic issues in meta-analysis. If the book you read didn't answer this question, read another. I don't know the meta-analysis jargon, but we always estimate a parameter that we hope approximates a true value.



    • #3
      With all due respect, the question is a bit vague, and I may be misinterpreting it. I think the original post is asking how we can justify using the pooled standardized mean difference from a meta-analysis to estimate the true effect size.

      I think the answer is that yes, the pooled SMD can be an estimator of the true effect size, but it's a lot more complicated than you think. Don't take meta-analyses as the ultimate in strength of evidence.

      Interventions can vary in a lot of ways. Take drugs, which you would think are quite black and white and amenable to meta-analysis. That's not the case: even if we had a pool of studies of one particular serotonin-norepinephrine reuptake inhibitor for depression, e.g. Effexor extended release - the brand name and not the generic, and note the extended-release version - they might still differ on the dosage, the timing of the intervention (e.g. when in the course of a major depressive episode the patient gets treated, and how long the patient gets treated for), who administered the intervention (e.g. primary care MDs, primary care mid-level practitioners like nurse practitioners, psychiatrists), the comparator (is Effexor XR being compared to usual care? What meds were given in usual care? Or was it some other medication - if so, an SSRI? A tricyclic antidepressant?), and a bunch of other dimensions.

      The difference in the estimates between the fixed-effects and random-effects meta-analyses is an illustration of the above. The random-effects meta-analysis finds evidence that the standardized mean differences differed enough that you have to wonder whether the intervention being compared was heterogeneous. If the intervention was heterogeneous, then what caused it to be effective? Were the interventions or the study populations too heterogeneous to legitimately compare? Here, the analysis estimated that 58.1% of the variation in the SMDs could be attributable to between-study heterogeneity, i.e. differences in the patients studied or in the flavor of the intervention administered, rather than to the same intervention plus random sampling variation. One analogy I have heard is that you can treat the studies themselves as a random sample of possible studies. If the studies are too different from one another, then there is not really one true effect size.

      I guess my advice would be that if you are doing meta-analysis to estimate the true effect size of a particular intervention, the studies have to be comparable: the interventions need to be substantively similar enough to compare. If you have between-study heterogeneity, the standard practice in systematic reviews of healthcare interventions is to attempt to explain it through meta-regression (i.e. try to isolate reported intervention or patient characteristics and incorporate them in a regression; a sketch follows below). If you are doing meta-analysis to get a parameter to feed into a decision model, that's a different use of meta-analysis, and more heterogeneity is completely legitimate if you understand what you are doing.
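      For what it's worth, a minimal sketch of what that can look like in Stata, using the user-written -metareg- command and two hypothetical study-level covariates, dose and meanage (the effect sizes and standard errors are the _ES and _seES variables that -metan- leaves behind):

      Code:
      * metareg is user-written; install it once from SSC
      ssc install metareg
      * run the meta-analysis first so that _ES and _seES exist
      metan samplesizeoftreatedgroup meanintreatedgroup sdintreatedgroup samplesizeofcontrolgroup meanincontrolgroup sdincontrolgroup, random
      * regress the study effect sizes on the hypothetical study-level covariates
      metareg _ES dose meanage, wsse(_seES)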
      Be aware that it can be very hard to answer a question without sample data. You can use the dataex command for this. Type help dataex at the command line.

      When presenting code or results, please use the code delimiters to format them. Use the # button on the formatting toolbar, between the " (double quote) and <> buttons.



      • #4
        Thanks, Weiwen

        In the fixed-effect model, the summary effect size can be used to estimate the true effect size, and the confidence interval is a confidence interval for that true effect size. However, what about the random-effects model? What is the meaning of its confidence interval? It looks like the confidence interval from a random-effects model is not a confidence interval for the true effect size(s). In the random-effects model there is no common effect size, so what is the meaning of the summary effect size?

        Tom



        • #5
          Your doubts relate to the core knowledge as well as the backbone of any meta-analysis, as remarked by Phil in #2. Please don't proceed without reading about it carefully in a decent book. You said the book you read "did not tell the theory behind this procedure". That is appalling, and it makes me doubt whether it was really a book or just a handout of applied statistics, for I have never found a book on meta-analysis that neglected to comment - oftentimes thoroughly - on this topic. Unfortunately, weird as it may seem, whatever the text (book or not) you picked, you must have chosen the wrong one.
          Last edited by Marcos Almeida; 20 Dec 2017, 02:50.
          Best regards,

          Marcos



          • #6
            The book I read is 'Introduction to Meta-Analysis' by Michael Borenstein, Larry V. Hedges, Julian P. T. Higgins, and Hannah R. Rothstein, published in 2009. Or could you please suggest some good textbooks?

            Tom



            • #7
              Tom:
              you may want to take a look at https://www.wiley.com/en-it/Methods+...-9780471490661
              Kind regards,
              Carlo
              (Stata 19.0)



              • #8
                Originally posted by Tom Hsiung
                Thanks, Weiwen

                In the fixed-effect model, the summary effect size can be used to estimate the true effect size, and the confidence interval is a confidence interval for that true effect size. However, what about the random-effects model? What is the meaning of its confidence interval? It looks like the confidence interval from a random-effects model is not a confidence interval for the true effect size(s). In the random-effects model there is no common effect size, so what is the meaning of the summary effect size?

                Tom
                https://www.ahrq.gov/research/findin...iew/index.html
                Per Riley et al., in a random-effects meta-analysis you are assuming that the true treatment effect varies randomly from study to study; it isn't just sampling variability. If each study had infinite size, the true treatment effects would still vary. And, per chapter 9.5.4 of the Cochrane Handbook for conducting systematic reviews, the pooled estimate from a random-effects meta-analysis describes the average of that heterogeneous set of treatment effects (and the corresponding confidence interval describes our uncertainty about where that average lies).
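                A rough sketch of the model as I understand it (my reading of Riley et al., so verify before relying on it):

                $$\hat\theta_i = \theta_i + \varepsilon_i, \qquad \theta_i \sim N(\mu, \tau^2), \qquad \varepsilon_i \sim N(0, v_i), \qquad w_i^* = \frac{1}{v_i + \hat\tau^2},$$

                so the D+L pooled estimate in your output is the $w_i^*$-weighted average of the study estimates, and its confidence interval is for $\mu$, the average of the true effects, not for any one study's true effect.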

                If that explanation didn't make sense, or even if it did make sense, I'd definitely urge you to consult someone with expertise in this field in person. If you're in the US and you are at a university with an Evidence-based Practice Center, I'd consult with them - and if they contradict anything I said, please ignore me.



                • #9
                  If each study had infinite size, the true treatment effects would still vary
                  If the number of studies were infinite, the confidence interval for the average effect size would disappear, right?

                  And I took a look at meta-regression; I think meta-regression would be more appropriate than a random-effects model, because it is not as though the true effect size varies randomly from study to study.

                  Tom
                  Last edited by Tom Hsiung; 20 Dec 2017, 09:12.



                  • #10
                    Originally posted by Tom Hsiung
                    If the number of studies were infinite, the confidence interval for the average effect size would disappear, right?
                    That's not what I said. The quote you responded to was me saying that if you have 10 hypothetical heterogeneous studies, then even if each one had infinite sample size, you would still have heterogeneity in the treatment effects, so you would still have a mean and a confidence interval for the treatment effect. For example, say the question of interest is the effect of antidepressants in general on the PHQ-9 depression symptom score. Here, you not only have different drug classes, but different dosages, titration regimens, different providers, different patient populations, maybe different countries and hence different translations of the PHQ-9, and different ways that patients interpret the PHQ-9 questions, etc. In this case, there is not one true effect size. There are multiple true effect sizes.
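                    If you want to see this in the output, I believe -metan- has an rfdist option that adds an approximate interval for where the true effect in a new study might lie, on top of the confidence interval for the average (sketched with the variable names from your first post):

                    Code:
                    * rfdist adds an approximate predictive interval for the true
                    * effect in a new study, alongside the CI for the average effect
                    metan samplesizeoftreatedgroup meanintreatedgroup sdintreatedgroup samplesizeofcontrolgroup meanincontrolgroup sdincontrolgroup, random rfdist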

                    I am not sure what happens if you have an infinite number of heterogeneous studies. Ask a real statistician.
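                    That said, my tentative algebra, which that statistician should check: the standard error of the pooled average is $\left( \sum_i 1/(v_i + \hat\tau^2) \right)^{-1/2}$, which does shrink toward zero as the number of studies grows, so the confidence interval for the average would collapse to a point; but the spread of the true effects, $\tau$, would not shrink, so an approximate 95% prediction interval for a new study's true effect, $\hat\mu \pm 1.96\,\tau$, would not disappear.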

                    Originally posted by Tom Hsiung
                    And I took a look at meta-regression; I think meta-regression would be more appropriate than a random-effects model, because it is not as though the true effect size varies randomly from study to study.

                    Tom
                    I have to respectfully but strongly disagree. In the real world, you almost always have studies that vary in the parameters you can measure (e.g. what country, what drug, what dosage, what timing, etc). You are often dealing with complex interventions that are inherently heterogeneous - the example in the paper I linked was inpatient rehabilitation in geriatric patients. The features of each intervention are almost certainly varied. Moreover, the interventions will also vary in ways that you can't measure. For example, do you know how aggressively each intervention was implemented, or how skilled the personnel were? Do you know how usual care varied from site to site? You very likely can't measure those things.

                    So yes, the true effect size often does vary from study to study. This is a critically important thing that I think users need to understand before they do meta-analysis, let alone meta-regression. Personally, because I am more used to complex interventions, and because I learned from skeptics, I usually assume that studies are heterogeneous and that the true effect size does vary. If I came across a meta-analysis of 10 trials of, for example, the effect of venlafaxine XR (the generic of Effexor XR, an SNRI) on PHQ-9 scores in U.S. primary care patients at 6 months after treatment initiation, then that is getting close to a situation where a fixed-effects meta-analysis is close to the truth. But note that this example is a bit artificial - the question is usually more like the effect of antidepressants on any reported symptom score, and you still have not accounted for how the method for titrating the dosage varied, whether all patients were native English speakers taking the English PHQ-9, whether the PHQ-9 was always administered by independent parties, etc.

                    Also, I am pretty sure that in meta-regression, you can choose to do random-effects meta-regression, and you probably should unless you have good reason not to. So, meta-regression is not more appropriate than a plain random-effects meta-analysis. It is more like something you can do after a random-effects meta-analysis, and that you probably should do if you have good enough data on intervention and patient characteristics and you observed unexplained heterogeneity.

                    All of this is why I tend to recommend consulting an expert in person.



                    • #11
                      This recent paper is closely related to this thread and might be of interest to some of you.

                      Rice, K., Higgins, J. P. T., & Lumley, T. (2017). A re-evaluation of fixed effect(s) meta-analysis. Journal of the Royal Statistical Society: Series A (Statistics in Society).



                      • #12
                        Well, a lot to learn!

                        Tom
