  • How to obtain the difference with 95% CI of two pooled subgroup means in meta-analysis?

    Dear Statalist members,

    I would like to perform a random-effects meta-analysis of age-, sex-, and socioeconomic-status-adjusted means (with standard errors) of maths test results published in multiple studies.
    Some studies used Teaching method I, others used Teaching method II.
    I used the subgroup option of the -meta forestplot- command to obtain the means and 95% CIs for the maths tests according to the two teaching methods (see attached figure).

    Code:
    * declare the data: effect size = adjusted mean, with its standard error
    meta set mean se, random studylabel(study) eslabel("Mean math score")
    meta forestplot, subgroup(teaching_methods)

    From my point of view, Cochran's Q statistic for testing differences between the two subgroups suggests that there is strong evidence against the null hypothesis of homogeneity between the two subgroup means; thus one might conclude that one teaching method is superior to the other.
    However, I am interested in the difference of the pooled means (4.86 - 3.47) so as to quantify the effect.

    I wonder whether, and if so how, I can calculate the 95% CI for the difference of the pooled means.

    I think using the formula below to calculate the 95% CI would be incorrect; besides, it would require the number of patients in each study.

    (x1 - x2) +/- t * sqrt(sp^2/n1 + sp^2/n2)

    x1, x2: sample 1 mean, sample 2 mean
    t: the critical t value for the chosen confidence level and (n1 + n2 - 2) degrees of freedom
    sp^2: pooled variance, with sp^2 = ((n1-1)*s1^2 + (n2-1)*s2^2) / (n1 + n2 - 2)
    n1, n2: sample 1 size, sample 2 size
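
    For clarity, here is that formula as a minimal Stata sketch with entirely made-up means, SDs, and sample sizes (none of these numbers come from the included studies), just to show where n1 and n2 would enter:

    Code:
    * made-up inputs, for illustration only
    scalar x1 = 4.9
    scalar x2 = 3.5
    scalar s1 = 1.2
    scalar s2 = 1.1
    scalar n1 = 50
    scalar n2 = 60
    * pooled variance and the usual two-sample 95% CI
    scalar sp2  = ((n1-1)*s1^2 + (n2-1)*s2^2) / (n1+n2-2)
    scalar half = invttail(n1+n2-2, 0.025) * sqrt(sp2/n1 + sp2/n2)
    display "difference = " %5.2f (x1-x2) "  95% CI: " %6.2f ((x1-x2)-half) " to " %6.2f ((x1-x2)+half)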


    What do you think? Can you help?

    Best wishes & thanks!
    Martin
    Attached Files: forest plot of the subgroup analysis (by teaching method)
    Last edited by Martin Mueller; 11 Jul 2023, 13:19.

  • #2
    Martin, hi.

    That difference is already tested via the test of group differences: the chi-squared value shown is actually the squared Z statistic comparing the two pooled means.

    To calculate that difference yourself, you can use a simple Z test, assuming that the two means are independent:

    Code:
    Z = (mean1-mean2)/sqrt((SE1^2)+(SE2^2))
    where SE denotes standard error.

    If you compute Z^2 using decent precision for the means and SEs, the results should be virtually identical to the Q-test with 1 df shown in the graph [Q(1) = 9.05].

    Code:
    MD = mean1-mean2
    SE(MD) = sqrt((SE1^2)+(SE2^2))
    
    Upper limit = MD+invnorm(0.975)*SE(MD)
    Lower limit = MD-invnorm(0.975)*SE(MD)
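
    For example, a minimal sketch of the calculation in Stata (the two pooled means are the ones in your forest plot, but the subgroup standard errors below are assumed values, purely for illustration, since they are not quoted in the thread):

    Code:
    * pooled subgroup means as read off the forest plot
    scalar m1 = 4.86
    scalar m2 = 3.47
    * the subgroup standard errors are NOT given in the thread; assumed here for illustration
    scalar se1 = 0.30
    scalar se2 = 0.35
    scalar MD   = m1 - m2
    scalar seMD = sqrt(se1^2 + se2^2)
    scalar z    = MD/seMD
    * with these assumed SEs, Z^2 works out to about 9.1, in line with the Q(1) = 9.05 in the graph
    display "MD  = " %6.3f MD
    display "Z^2 = " %6.3f (z^2)
    display "95% CI: " %6.3f (MD - invnormal(0.975)*seMD) " to " %6.3f (MD + invnormal(0.975)*seMD)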
    Hope this helps.
    Tiago

    Last edited by Tiago Pereira; 11 Jul 2023, 14:47.



    • #3
      Dear Tiago

      That is absolutely perfect and works like a charm!

      Thank you very much!
      Martin



      • #4
        Martin Mueller, I wonder if you can get the contrast you want via meta regress. Does this work?

        Code:
        meta regress i.teaching_methods
        margins teaching_methods
        --
        Bruce Weaver
        Email: [email protected]
        Version: Stata/MP 18.5 (Windows)



        • #5
          Dear Bruce
          Thank you for your interesting suggestion. I do get estimates the way you suggested.

          And I could further use the -margins- command to get the difference between the effects:
          Code:
          margins, base dydx(teaching_methods)
          However, the estimates, and especially the 95% CIs, differ from those obtained with the subgroup option of the -meta forestplot- command:
          Code:
          meta forestplot, subgroup(teaching_methods)
          Best wishes
          Martin



          • #6
            That's interesting, Martin. I expected them to be the same. When time permits, I may have to tinker around with some examples.

            Cheers,
            Bruce
            --
            Bruce Weaver
            Email: [email protected]
            Version: Stata/MP 18.5 (Windows)



            • #7
              Bruce is right.

              However, the test of differences between subgroups may or may not be equivalent to the meta-regression model. The Z test (or, equivalently, the chi-squared test with 1 df) examines the difference between two independent underlying distributions under a fixed-effect model.

              By default, -meta regress- runs a random-effects meta-regression that examines the association of a 1-unit change in the moderator with the effect size, assuming a single underlying (random-effects) distribution.

              The results will, on average, point in the same direction, but they are not the same. In practice, the Z test and the random-effects meta-regression model can lead to very different conclusions.

              Besides, the Z test can be performed with both random- and fixed-effects summary estimates, and it is expected to be more powerful than meta-regression when one of the subgroups has only one or a few estimates.

              If you want to replicate the Z-test via a meta-regression model, you should use a fixed-effect meta-regression model:

              Code:
              * declare the data under a fixed-effect model
              meta set es se, fixed
              * regress the effect sizes on the (binary) subgroup indicator
              meta regress group
              The results above should be identical to the between-group difference test (for 2 subgroups).
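
              A minimal way to check this in Stata (assuming the same hypothetical -es-, -se-, and binary -group- variables as above) is to put the between-group test from -meta summarize- next to the meta-regression:

              Code:
              * the "Test of group differences" reported here should match the
              * model test of the fixed-effect meta-regression above (2 subgroups)
              meta summarize, subgroup(group)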

              Hope this clarifies the issue.

              All the best,

              Tiago.
              Last edited by Tiago Pereira; 18 Jul 2023, 10:04.



              • #8
                Originally posted by Tiago Pereira
                If you want to replicate the Z-test via a meta-regression model, you should use a fixed-effect meta-regression model [...]

                Well spotted re fixed vs random effects, Tiago Pereira! I should have thought of that.

                It seems to me that if one is using a random effects model for the meta-analysis, the contrast obtained via -meta regress- (i.e., using the same random effects model) is the one to use. YMMV.
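
                To spell that out, a minimal sketch of the random-effects contrast I have in mind, reusing the hypothetical -es-, -se-, and -group- names from Tiago's example:

                Code:
                * random-effects declaration (REML is the default method), then the subgroup contrast
                meta set es se, random(reml)
                meta regress i.group
                margins, dydx(group)
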
                --
                Bruce Weaver
                Email: [email protected]
                Version: Stata/MP 18.5 (Windows)



                • #9
                  Thank you very much, Tiago and Bruce!
                  That really helped and clarified a lot.
