Dear Statalisters,
I am working with the -xtpmg- command by Blackburne and Frank (2007), but I am not sure how to interpret the reported standard errors of the group-specific long-run multipliers.
As Output 1 shows, I get a standard error of 0.212 for the long-run multiplier for Norway (i.e., ccode_578ec). However, when I run an error correction model (ECM) with a Bewley transformation on Norway alone (see Output 2), the corresponding standard error is 0.023 (see De Boef and Keele (2008) for a discussion of the ECM and the Bewley transformation). Why do these standard errors differ?
The variables are log(CO2 emissions per capita) and log(GDP per capita), and the panel is balanced, running from 1950 to 2008. I am using Stata/SE 13.0.
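In case it helps before I upload the full output, the two estimations were roughly as follows (I write lco2 and lgdp for the two logged variables; the panel identifier is ccode, with 578 being Norway):

* panel setup (assuming annual data with a year variable)
xtset ccode year

* Output 1: mean group ECM on the full panel; -full- displays the group-specific estimates
xtpmg d.lco2 d.lgdp, lr(l.lco2 lgdp) ec(ec) replace mg full

* Output 2: single-country ECM for Norway ...
regress d.lco2 l.lco2 d.lgdp l.lgdp if ccode == 578

* ... and the Bewley transformation, instrumenting d.lco2 with l.lco2 so that
* the long-run multiplier and its standard error are reported directly
ivregress 2sls lco2 lgdp d.lgdp (d.lco2 = l.lco2) if ccode == 578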
It might be relevant that I eventually want to use -xtpmg- to fit Chudik and Pesaran's (2015) dynamic common correlated effects mean group estimator, but for now I am just trying to get familiar with the command.
Please let me know if any necessary information is missing, and I will upload it.
Output 1
[output not shown]

Output 2
[output not shown]
Lastly, two related questions:
1) To me, it seems that the standard errors for the averaged coefficients in -xtpmg (...), mg- indicate whether the group-specific coefficients are homogeneous (low standard error) or heterogeneous (high standard error); see eq. (5) on page 200*, reproduced below. It is not clear to me that this is a good measure of the accuracy of predictions made with the regression line, since group-specific coefficients can have high standard errors and still be homogeneous. Furthermore, I thought that an important reason to apply the mean group estimator was to allow for parameter heterogeneity, but the variance formula in eq. (5) seems to penalize exactly that. It would seem more appropriate to me to calculate the unweighted average of the standard errors of the group-specific coefficients. Can anyone give an intuitive explanation of why the standard errors reported for the averaged coefficients in -xtpmg (...), mg- are preferable? I was unable to find the answer in Pesaran and Smith (1995).
*Take the square root of (5) to get the standard errors.
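If I read eq. (5) correctly, the variance of a mean group coefficient is simply the dispersion of the N group-specific estimates around their unweighted average (in my own notation):

Var(b_MG) = 1/(N(N-1)) * sum over i = 1,...,N of (b_i - b_MG)^2

so the reported standard error (its square root) grows with cross-country heterogeneity in the b_i, regardless of how precisely each individual b_i is estimated.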
2) Blackburne and Frank's (2007) article in the Stata Journal seems to suggest that one should include a first-differenced term and a contemporaneous level term of the independent variables on the right-hand side of the equation when an ECM is specified with the mean group estimator, whereas De Boef and Keele (2008) seem to suggest that the level term should be lagged. I have tried both approaches (sketched below) to see whether there are any differences: the coefficient on the first-differenced variable seems to be deflated when the level variable is not lagged, while the remaining coefficients and standard errors are essentially unchanged. Should Blackburne and Frank's mean group ECM be interpreted differently from the ECM that De Boef and Keele suggest?
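For concreteness, the two specifications I compared were roughly (same simplified variable names as above):

* Blackburne and Frank (2007): contemporaneous level of lgdp in the long-run relation
xtpmg d.lco2 d.lgdp, lr(l.lco2 lgdp) ec(ec) replace mg

* De Boef and Keele (2008): lagged level of lgdp
xtpmg d.lco2 d.lgdp, lr(l.lco2 l.lgdp) ec(ec) replace mg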
Thanks in advance!
Ole Martin Lægreid
PhD student, Department of Political Science
University of Gothenburg