  • Reliability analysis - Doubt on Cronbach Alpha for Rosenberg Scale of Self Esteem (Psychometric Test)

    Dear all,

    The problem I am facing concerns a psychometric survey, the Rosenberg Self-Esteem Scale. It is a simple 10-item unidimensional scale with responses on a 1–4 Likert scale. For details about the scale and the questionnaire, please refer to: http://fetzer.org/sites/default/file...ELF-ESTEEM.pdf.

    When I analyzed the reliability of these data using Cronbach's alpha as a measure of internal consistency, I observed that the output shows negative signs for all questions that are supposed to have positive signs, and vice versa.

    Positive sign: questions that are worded positively, e.g., "On the whole, I am satisfied with myself."
    Negative sign: questions that are phrased negatively, e.g., "At times, I think I am no good at all."
    We always reverse-score the negatively worded questions to arrive at the final score for a psychometric test.
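For readers unfamiliar with the convention: reverse-scoring a 1–4 Likert item just maps each response to its mirror image on the scale. A minimal sketch (plain Python, not tied to any particular statistics package):

```python
def reverse_score(response, scale_min=1, scale_max=4):
    """Mirror a Likert response so that higher values mean higher self-esteem.

    For a 1-4 scale: 1 -> 4, 2 -> 3, 3 -> 2, 4 -> 1.
    """
    return scale_max + scale_min - response

# e.g. a strong agreement (4) with "At times, I think I am no good at all"
# becomes a 1 after reverse-scoring
print(reverse_score(4))  # 1
print(reverse_score(1))  # 4
```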


    As per the scoring key of the Rosenberg Scale:
    1. Questions 2, 5, 6, 8, 9 are supposed to be negative questions (per the scale's documentation).
    2. Questions 1, 3, 4, 7, 10 are supposed to be positive questions.

    The output is given below. It shows the opposite of the expected sign for every question except question 7.
    Item        Obs   Sign   item-test   item-rest   avg inter-item cov.   alpha
    q1          182    -       0.2794      0.0464        0.0859823         0.5334
    q2          179    +       0.4809      0.2415        0.0662029         0.4695
    q3          181    -       0.4171      0.2237        0.0727261         0.4812
    q4          182    -       0.4401      0.2438        0.0723516         0.4805
    q5          181    +       0.4980      0.2605        0.0629312         0.4549
    q6          178    +       0.5823      0.3680        0.0559048         0.4235
    q7          181    +       0.1968     -0.0109        0.0910891         0.5435
    q8          181    +       0.2166      0.0152        0.0909171         0.5424
    q9          182    +       0.6808      0.4973        0.0451169         0.3674
    q10         182    -       0.3474      0.1539        0.0768235         0.4952
    Test scale                                           0.0719981         0.5104
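For context on where these columns come from: the sign column in output like this typically reflects the sign of the correlation between an item and the rest of the scale, and alpha itself is computed from item and total-score variances. A generic sketch (plain NumPy on simulated data, not the actual routine that produced the table above):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_obs, k) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def item_rest_sign(items, j):
    """Sign of the correlation between item j and the sum of the other items."""
    items = np.asarray(items, dtype=float)
    rest = np.delete(items, j, axis=1).sum(axis=1)
    r = np.corrcoef(items[:, j], rest)[0, 1]
    return "+" if r >= 0 else "-"

# Simulate 4 positively keyed items and 1 negatively keyed, un-reversed item.
rng = np.random.default_rng(0)
trait = rng.normal(size=200)
pos = np.column_stack([trait + rng.normal(scale=0.8, size=200) for _ in range(4)])
neg = -(trait + rng.normal(scale=0.8, size=200))[:, None]
data = np.hstack([pos, neg])

print(item_rest_sign(data, 4))  # the un-reversed negative item gets a "-" sign
fixed = data.copy()
fixed[:, 4] *= -1               # reverse-score the negative item
print(cronbach_alpha(data), cronbach_alpha(fixed))  # alpha improves after reversal
```

The point of the sketch: an item left in its negative orientation correlates negatively with the rest-score (hence the "-" sign), and reversing it raises alpha.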

    Could you please suggest a possible explanation for these results?

    Thanks

  • #2
    As long as the differences in the sign are consistent (e.g., all positive items show negative signs, or vice versa), it doesn't matter. The sign indicates that, in order for the response sets to all have the same interpretation (e.g., an increase in the value of an item corresponds to an increase in what is being measured), some response sets should use a reversed order. Were the results scored correctly?

    Also, even after summing the scores across items, they are still ordinal, not continuous as the link you referred to would suggest (summing does not change the underlying measurement scale).

    In general, however, the reliability is pretty bad for something that is established. It might be better to look for published equating constants and use those to scale your results, and/or correct for possible parameter drift observed in your sample.
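The "consistent signs don't matter" point above can be checked directly: recoding every item in the same direction (x → 5 − x on a 1–4 scale) leaves item variances and covariances unchanged, so Cronbach's alpha is identical. A small simulation with plain NumPy (not the original software, data simulated for illustration):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_obs, k) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return k / (k - 1) * (1.0 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(1)
trait = rng.normal(size=150)
# 5 Likert-style items (1-4) all loading on the same trait
raw = 2.5 + trait[:, None] + rng.normal(scale=0.7, size=(150, 5))
likert = np.clip(np.round(raw), 1, 4)

flipped = 5 - likert  # consistently reverse-code EVERY item
print(cronbach_alpha(likert), cronbach_alpha(flipped))  # identical values
```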


    • #3
      Thanks a lot for your response. When I was examining the data, I thought the same thing, but one question, Q7, records the sign it is expected to record, and that is what complicates the analysis. I have checked the scoring and the questionnaire several times. One more question: how much can reliability be influenced by the level of comprehension of our target population?




      • #4
        I don't have any empirical evidence, but I would assume respondent interpretation of the item stem could have a very large effect on the reliability of the measure. If the respondents are all responding the same way to something they perceive differently, what would the item be measuring? If the items still fit a unidimensional model, you may want to consider using IRT and looking at DIF based on whichever demographic variables you believe identify the group(s) that are interpreting the question differently. It may not detect DIF, but particularly in a 3PL model it might show up as the c parameter (pseudo-guessing) functioning differently across the groups when the level of theta is fixed between them.



        • #5
          Good morning, my question is similar. I am doing a reliability analysis and I have positively and negatively worded items in my survey. My question is how to interpret the signs reported for the items, that is, how to know which items I have to reverse based on these resulting signs.

          Ex.
          Positive items: item1, item2, item3, item4, item5, item6, item9, item11
          Negative items: item7, item8, item10, item12

          I attach the result obtained for reference.

          In this case I have not yet reversed the negative items. I have not yet done the data collection or built the database, so, lacking that information, I would like to know how to determine the direction of each item from these signs.

          [Attached image: reliability.png — reliability analysis output]

          Thank you for your answers and your time.
