Dear all,
I am working with a panel dataset on a GMM estimation of some parameters.
Using

Code:
xtreg y x

I estimated the coefficients of a regression of the form y = Xb + u, then I extracted the residuals using

Code:
predict r, e

and saved them.
Following the paper by Hubbard (1994), the model I am trying to estimate assumes that the residual r_it is the sum of an AR(1) component u_it and an additional error term v_it.
In formulas: r_it = u_it + v_it = a*u_i,t-1 + e_it + v_it, where e_it is the innovation of the AR(1) process and a is the autocorrelation coefficient.
To estimate all the pieces separately, the paper uses GMM, equating the theoretical moments with their empirical counterparts.
In detail, let C_k be the lag-k autocovariance of delta_r_it = r_it - r_i,t-1, that is, C_k = E[ (r_it - r_i,t-1)*(r_i,t-k - r_i,t-k-1) ]. Then, after some tedious algebra:
C_0 = 2*{sigma_e}/(1+{a}) + 2*{sigma_u}
C_1 = -{sigma_e}*(1-{a})/(1+{a}) - {sigma_u}
C_2 = -{a}*{sigma_e}*(1-{a})/(1+{a})
...
C_6 = -{a}^5*{sigma_e}*(1-{a})/(1+{a})
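As a quick sanity check on this algebra (done outside Stata, in Python, with made-up parameter values), the empirical autocovariances of delta_r from a long simulated series line up with the formulas above:

```python
import numpy as np

rng = np.random.default_rng(0)
a, sigma_e, sigma_u = 0.6, 1.0, 0.5  # hypothetical values; sigma_* are variances
T = 500_000

# simulate r_t = u_t + v_t with u_t = a*u_{t-1} + e_t
e = rng.normal(0.0, np.sqrt(sigma_e), T)
v = rng.normal(0.0, np.sqrt(sigma_u), T)
u = np.empty(T)
u[0] = e[0]
for t in range(1, T):
    u[t] = a * u[t - 1] + e[t]
r = u + v

dr = np.diff(r)
dr -= dr.mean()

def C(k):
    """Empirical lag-k autocovariance of delta_r."""
    return np.mean(dr[k:] * dr[: dr.size - k]) if k else np.mean(dr ** 2)

A = sigma_e * (1 - a) / (1 + a)
print(C(0), 2 * sigma_e / (1 + a) + 2 * sigma_u)  # both close to 2.25
print(C(1), -A - sigma_u)                         # both close to -0.75
print(C(2), -a * A)                               # both close to -0.15
```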
The parameters to be estimated are {sigma_e}, {a}, and {sigma_u}.
How can I implement this estimation/minimisation problem with Stata's gmm command?
I first computed the empirical covariances:
Code:
gen delta_r = r - l.r
egen mean_r = mean(delta_r)
gen C_0 = (delta_r - mean_r)^2
gen C_1 = (delta_r - mean_r)*(l.delta_r - mean_r)
...
gen C_6 = (delta_r - mean_r)*(l6.delta_r - mean_r)
Then, to implement the GMM step, I tried:

Code:
gmm (C_0 - 2*{sigma_e}/(1+{a}) - 2*{sigma_u}) ///
    (C_1 + {sigma_e}*(1-{a})/(1+{a}) + {sigma_u}) ///
    (C_2 + {a}*{sigma_e}*(1-{a})/(1+{a})) ///
    ...
    (C_6 + {a}^5*{sigma_e}*(1-{a})/(1+{a})), ///
    instruments(C_0 C_1 C_2 C_3 C_4 C_5 C_6) winitial(identity)
However, the code does not run and exits with the error:

Code:
could not calculate numerical derivatives -- flat or discontinuous region encountered
r(430);

I suspect this comes from a misspecified command rather than from a genuine problem with the model. How can I fix it?
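To illustrate that the moment conditions themselves do identify the three parameters, here is a minimal sketch outside Stata, in Python, with made-up "true" values and plain least squares on the stacked moments (i.e. identity weighting, not Stata's gmm machinery):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
a0, se0, su0 = 0.6, 1.0, 0.5  # hypothetical true values
T = 500_000

# simulate r_t = u_t + v_t with u_t = a0*u_{t-1} + e_t
e = rng.normal(0.0, np.sqrt(se0), T)
v = rng.normal(0.0, np.sqrt(su0), T)
u = np.empty(T)
u[0] = e[0]
for t in range(1, T):
    u[t] = a0 * u[t - 1] + e[t]
dr = np.diff(u + v)
dr -= dr.mean()

# empirical autocovariances C_0 .. C_6 of delta_r
Ck = [np.mean(dr[k:] * dr[: dr.size - k]) for k in range(7)]

def g(p):
    """Stacked moment conditions: empirical C_k minus theoretical C_k."""
    a, se, su = p
    A = se * (1 - a) / (1 + a)
    out = [Ck[0] - (2 * se / (1 + a) + 2 * su),
           Ck[1] - (-A - su)]
    out += [Ck[k] - (-(a ** (k - 1)) * A) for k in range(2, 7)]
    return out

est = least_squares(g, x0=[0.0, 1.0, 1.0],
                    bounds=([-0.99, 0.0, 0.0], [0.99, 10.0, 10.0])).x
print(est)  # close to (0.6, 1.0, 0.5)
```

This is only a sketch with identity weighting, not the efficient GMM weighting matrix, but the minimisation is well behaved, which is what makes me suspect the gmm syntax rather than the model.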
Thanks for your attention, and apologies for the long question.
Best,
Luca