  • ANOVA post-hoc comparison of one level to all others

    I am working with a data set comprising 15 participants who each rated 37 separate Likert-scale questions. I would like to know whether the ratings for any one question are significantly different from those for all of the others. For simplicity, I am treating the Likert-scale responses as normally distributed and continuous.

    I can use an ANOVA
    Code:
    anova importance question
    to test that there is an overall difference in ratings across all 37 questions. However, I am stuck on how to assess whether any one question is significantly different from all the others. Using
    Code:
    contrast g.question
    will give me the difference between any one of the 37 ratings and the grand mean (I think), but I specifically want to see whether any one rating is different from the combination of all the others. Does either the contrast or margins command do this? Do I need something else?

    Thanks in advance for the help.

  • #2
    If that's the central question, why not answer it using a t test?

    • #3
      Then I would be left with 37 separate t-tests and multiple-comparison issues. I am hoping there is a more straightforward way to do this.

      • #4
        If you are saying that you do not know in advance which category to compare with all the others, then it seems that you have multiple comparison problems regardless of what you do.

        • #5
          To make matters worse, if I understand your data, each of these 37 questions was answered by the same panel of 15 respondents. If that is correct, you cannot really use -anova importance question-, because your observations are not independent: they are clustered within respondents. So your basic analysis needs to be something like -mixed importance i.question || participant:-, or a fixed-effects analysis.
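
          As a rough sketch, assuming the variable names importance, question, and participant used above, those two options might look like this:
          Code:
          * mixed model with a random intercept for each participant
          mixed importance i.question || participant:

          * or a fixed-effects alternative (participant fixed effects)
          xtset participant
          xtreg importance i.question, fe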

          I'm not sure I understand what hypothesis about the questions you are trying to test. If the null hypothesis is that the mean importance response is the same for all of the questions, then that is a single joint hypothesis that you can handle with -testparm i.question, equal-. If there is a single question that you have identified in advance, then that, too, can be tested with a single hypothesis test (though it is perhaps most simply done by re-running the main analysis with just a single indicator variable for that one question).
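
          For example (a sketch with the same assumed variable names, picking question 5 purely for illustration):
          Code:
          * joint test that the mean response is the same across all 37 questions
          mixed importance i.question || participant:
          testparm i.question, equal

          * one pre-specified question versus all of the others combined
          gen byte q5 = (question == 5)
          mixed importance i.q5 || participant: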

          But if you want to go through each of the 37 questions to determine whether it stands out distinctively from the others, then, as Nick Cox says, you are in the land of multiple comparison problems no matter what.

          • #6
            Yes, I do, and my first post was incomplete. I am using
            Code:
            contrast g.question, mcompare(bonferroni) pveffects
            to assess for significant differences between any one question and the grand mean of the dependent variable, and
            Code:
            pwcompare question, mcompare(bonferroni) pveffects
            to look at pairwise comparisons. But doing independent t-tests to compare any one question to the mean of all others will not allow for a Bonferroni adjustment.

            • #7
              Well, if you're going to do 37 t-tests and you want to Bonferroni adjust them, all you have to do is multiply each t-test's p-value by 37 (and if the result is greater than 1, replace that by 1.0). The Bonferroni adjustment is the easiest of the multiple comparison adjustments.
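
              As a sketch of what that could look like (assuming question is coded 1 through 37, and setting aside the clustering issue raised in #5; the helper variable onevsrest is just for illustration):
              Code:
              * one-vs-rest t-test for each question, with the p-value Bonferroni-adjusted by hand
              forvalues q = 1/37 {
                  quietly gen byte onevsrest = (question == `q')
                  quietly ttest importance, by(onevsrest)
                  local p = min(1, r(p)*37)
                  display "question `q': Bonferroni-adjusted p = " %6.4f `p'
                  drop onevsrest
              }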
