Dear All,
I am working with cross-sectional data from the latest round of the European Social Survey (Round 9, 2018), and I want to test measurement invariance of the Human Values Scale across 28 countries by conducting multigroup confirmatory factor analysis (MGCFA) with -sem-.
This has been done before, e.g. by Davidov (2010, 2008) and Davidov, Schmidt and Schwartz (2008)¹, and in most cases at least metric invariance was confirmed.
My own results, however, support metric invariance for only 2 of the 10 value dimensions. As I am completely inexperienced in testing measurement invariance, I would like to ask whether I have proceeded correctly.
I am using the -sem- command in Stata 16.1 and have followed the procedure outlined by UCLA and on the Stata Blog.
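For each value dimension the sequence I followed is the one sketched below; the dimension X and its items x1 and x2 are hypothetical placeholders, and the concrete models for Power and Achievement follow further down.
Code:
* Sketch of the general sequence (hypothetical dimension X with items x1 x2)

* Configural model: same factor structure in every country, no equality constraints
sem (X -> x1 x2), var(X@1) mean(X@0) group(country) ginvariant(none)
estimates store configural

* Metric model: loadings constrained equal across countries; the latent variance
* is fixed to 1 in the first group only, for identification
sem (X -> x1 x2), var(1:X@1) mean(X@0) group(country) ginvariant(mcoef)
estimates store metric

* Likelihood-ratio test of the constrained (metric) against the unconstrained
* (configural) model
lrtest metric configural, stats

* (A scalar model would additionally constrain the intercepts,
*  i.e. ginvariant(mcoef mcons); not used in the examples below.)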
An excerpt of my data via -dataex-:
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input byte(ipshabt ipsuces iplylfr iphlppl) long country
2 2 2 2 1
3 4 2 3 1
3 3 3 2 1
1 1 1 1 1
4 6 2 2 1
2 3 1 1 1
2 3 3 3 1
4 4 1 2 1
1 5 2 1 1
3 3 3 2 1
6 3 4 6 1
6 6 2 2 1
1 2 1 1 1
6 5 1 2 1
2 4 1 1 1
3 3 1 1 1
2 3 2 2 1
2 3 2 1 1
1 2 2 2 1
3 3 2 2 1
end
As an example, here is the syntax for the value dimensions “Power” (PO) and “Achievement” (AC), with the configural and metric models compared via likelihood-ratio test:
/*Value Dimension Power - PO*/
Code:
. /*Configural Model*/
. sem (Po -> iprspot imprich), iterate(100) ///
>     var(Po@1)          /// Set variance to 1
>     mean(Po@0)         /// Set mean to 0
>     group(country)     ///
>     ginvariant(none)   // no parameters constrained to be equal across groups

. est store configural   /*Store model named configural*/
Code:
. /*Metric Model*/
. sem (Po -> iprspot imprich), iterate(100) ///
>     var(1:Po@1)        /// Set variance to 1 in the first group only
>     mean(Po@0)         /// Set mean to 0
>     group(country)     ///
>     ginvariant(mcoef)  /* constrain loadings to be equal across groups */

. est store metric
Code:
. /*Comparison of metric and configural model*/
. lrtest metric configural, stats // configural invariance

Likelihood-ratio test                           LR chi2(4)  =    198.15
(Assumption: metric nested in configural)       Prob > chi2 =    0.0000

Akaike's information criterion and Bayesian information criterion

-----------------------------------------------------------------------------
       Model |          N   ll(null)  ll(model)      df        AIC        BIC
-------------+---------------------------------------------------------------
      metric |     47,974          .    -155780     145   311850.1   313122.9
  configural |     47,974          .    -155681     149   311659.9   312967.9
-----------------------------------------------------------------------------
Note: BIC uses N = number of observations. See [R] BIC note.
/*Value Dimension Achievement - AC*/
Code:
. /*Configural Model*/
. sem (Ac -> ipshabt ipsuces), iterate(100) ///
>     var(Ac@1)          /// Set variance to 1
>     mean(Ac@0)         /// Set mean to 0
>     group(country)     ///
>     ginvariant(none)

. est store configural   /*Store model named configural*/
Code:
. /*Metric Model*/
. sem (Ac -> ipshabt ipsuces), iterate(100) ///
>     var(1:Ac@1)        /// Set variance to 1 in the first group only
>     mean(Ac@0)         /// Set mean to 0
>     group(country)     ///
>     ginvariant(mcoef)  /* constrain loadings to be equal across groups */

. est store metric       /*Store model named metric*/
Code:
. /*Compare metric to configural model*/
. lrtest metric configural, stats // configural invariance

Likelihood-ratio test                           LR chi2(1)  =   -244.13
(Assumption: configural nested in metric)       Prob > chi2 =    1.0000

Akaike's information criterion and Bayesian information criterion

-----------------------------------------------------------------------------
       Model |          N   ll(null)  ll(model)      df        AIC        BIC
-------------+---------------------------------------------------------------
  configural |     48,034          .  -154901.7     145   310093.5   311366.5
      metric |     48,034          .  -155023.8     146   310339.6   311621.5
-----------------------------------------------------------------------------
Note: BIC uses N = number of observations. See [R] BIC note.
Do these results indicate that metric invariance can be established across countries for the value dimension Achievement? Is the syntax correct?

Thanks in advance and best regards,
Amelie
-------------------------------------------------
¹
- Davidov, Eldad, Peter Schmidt, and Shalom H. Schwartz. 2008. Bringing values back in: Testing the adequacy of the European Social Survey to measure values in 20 countries. Public Opinion Quarterly 72:420–445. https://doi.org/10.1093/poq/nfn035
- Davidov, Eldad. 2008. A cross-country and cross-time comparison of the human values measurements with the second round of the European Social Survey. Survey Research Methods 2 (1):33–46. https://doi.org/10.18148/srm/2008.v2i1.365
- Davidov, Eldad. 2010. Testing for comparability of human values across countries and time with the third round of the European Social Survey. International Journal of Comparative Sociology 58 (3):171–191. https://doi.org/10.1177/0020715210363534