
  • Comparing 3 correlation coefficients

    I have a group of about 1300 patients, each of whom had disease 1 (subgroup 1), disease 2 (subgroup 2), or disease 3 (subgroup 3).

    I have correlated certain biomarkers with kidney function in the whole group (N = 1300), and then in each of the subgroups (N = circa 400 in each subgroup).

    e.g. Fibrinogen & GFR


    I would like to work out whether there is a statistically significant difference between these correlations across the 3 disease subgroups (R1 vs R2 vs R3). Is there a way to do this with Stata or SPSS?

    The only online calculators I've found for Fisher's z transformation only seem to allow comparison between two correlation coefficients (as opposed to the 3 that I need).

    I've also found this link: http://home.ubalt.edu/ntsbarsh/busin.../MultiCorr.htm but I'm not sure whether this calculator is accurate.

    Does anyone have any suggestions?

    Dearbhla

  • #2
    Hello Dearbhla Kelly. Syntax file #5 on this web page has some SPSS code you could tinker with to get the desired test. Note, however, that the test that code carries out is just the standard Chi-square test of heterogeneity that meta-analysts use, so I think you'd find it a lot simpler to do this in Stata: create a small data set with variables group, r and n, compute Zr and SE, and then run a meta-analysis to obtain the Chi-square test of heterogeneity (i.e., the Q-value). Here's an example you can tinker with.

    Code:
    // Create a small data set with group, r and n.
    // Replace r and n values with your own.
    clear *
    input byte group r n
    1 .32 405
    2 .35 402
    3 .29 398
    end
    
    generate double Zr = atanh(r)
    generate double SE = sqrt(1/(n-3))
    
    // If you have version 16, do this:
    
    meta set Zr SE
    meta summarize
    
    // If you do not have version 16:
    // findit metan // If you need to install -metan- program
    metan Zr SE
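
    For anyone without Stata at hand, the same heterogeneity test can be sketched in a few lines of Python (this is my own illustration, not thread code; the r and n values are the same placeholders as in the Stata example above). With 3 groups the test has 2 degrees of freedom, and the chi-square survival function with df = 2 simplifies to exp(-Q/2), so no stats library is needed:

```python
import math

# Placeholder correlations and group sizes (replace with your own)
r = [0.32, 0.35, 0.29]
n = [405, 402, 398]

z = [math.atanh(ri) for ri in r]            # Fisher's r-to-Z
w = [ni - 3 for ni in n]                    # weight = 1/SE^2 = n - 3

# Fixed-effect (inverse-variance weighted) mean of the Zr values
zbar = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)

# Cochran's Q: weighted squared deviations of the Zr values from the mean
Q = sum(wi * (zi - zbar) ** 2 for wi, zi in zip(w, z))
df = len(r) - 1
p = math.exp(-Q / 2)                        # chi-square sf, valid for df = 2 only

print(f"Q = {Q:.3f}, df = {df}, p = {p:.3f}")
```

    With these placeholder numbers the three correlations are quite similar, so Q is small and the test is nonsignificant.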

    HTH.
    --
    Bruce Weaver
    Email: [email protected]
    Web: http://sites.google.com/a/lakeheadu.ca/bweaver/
    Version: Stata/MP 18.0 (Windows)



    • #3
      Thank you - that is really helpful!

      Zr is the Fisher r-to-Z transformation, right?



      • #4
        Yes, Zr = Fisher's r-to-Z transformation, which is just the inverse hyperbolic tangent: atanh(r) in Stata.
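
        As a quick numerical check (a Python one-liner, not from the thread), atanh(r) and the textbook formula ½·ln((1+r)/(1−r)) agree to machine precision:

```python
import math

r = 0.32  # any correlation in (-1, 1)
zr_atanh = math.atanh(r)
zr_formula = 0.5 * math.log((1 + r) / (1 - r))
print(zr_atanh, zr_formula)  # identical up to floating-point rounding
```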
        --
        Bruce Weaver
        Email: [email protected]
        Web: http://sites.google.com/a/lakeheadu.ca/bweaver/
        Version: Stata/MP 18.0 (Windows)



        • #5
          Some background on comparing Pearson correlations:

          https://journals.plos.org/plosone/ar...l.pone.0121945

          and Spearman correlations:

          https://www.omicsonline.org/open-acc....php?aid=54592

          -Dave



          • #6
            There's a pretty strong critique of the validity of the various asymptotic approaches to testing correlation coefficients when the underlying distributions are not normal (as they usually aren't). See:

            Berry, K. J., & Mielke Jr, P. W. (2000). A Monte Carlo investigation of the Fisher Z transformation for normal and nonnormal distributions. Psychological Reports, 87(3_suppl), 1101-1114. https://journals.sagepub.com/doi/abs...000.87.3f.1101

            I'd think instead about taking advantage of the ease with which one can use -permute- or -bootstrap- in Stata to produce randomization tests or CIs. One would also have the freedom to define a test statistic of interest (e.g., the sum of the absolute percentage differences in the correlations across the three groups). It would be interesting to compare the results on some actual data.



            • #7
              The second citation above is the only paper I know of for testing and providing CIs for Spearman correlations, although only SAS macros are available for the methods. Thanks for the reference. Here are two papers I looked at recently.

              Bishara, A. J., & Hittner, J. B. (2012). Testing the significance of a correlation with nonnormal data: Comparison of Pearson, Spearman, transformation, and resampling approaches. Psychological Methods, 17(3), 399–417.

              Bishara, A. J., & Hittner, J. B. (2017). Confidence intervals for correlations when data are not normal. Behavior Research Methods, 49, 294–309. DOI 10.3758/s13428-016-0702-8



              • #8
                Does it make sense to first transform the two variables into fractional ranks (by group), and then use OLS with interaction terms to test the significance of differences in rank correlation between groups? I read in Chetty et al. (2014) that the OLS regression coefficient between two uniformly distributed variables (and fractional rank variables are uniformly distributed) is equal to, or can be interpreted as, the correlation coefficient between the two variables.

                Code:
                reg rank2 c.rank1##i.groupvar

