  • Change from -reg- to -prais- makes R-squared missing

    Dear all,

    I am estimating a trend rate by fitting the equation y = a + bx with OLS (-reg-), where x is the time variable and b is the trend rate. Because the errors show first-order autocorrelation, I switched to -prais- (Prais-Winsten estimation). However, the model becomes completely insignificant, with Prob > F = 1.0000 and R-squared missing. If I instead use -prais, corc- (Cochrane-Orcutt estimation), then Prob > F = 0.8.

    I would like to ask why Prais-Winsten estimation (-prais-) leads to such a poor result. The Stata manual mentions that for small samples (n = 20 in my case), Prais-Winsten has a "significant advantage" because it preserves the first observation.

    Thank you very much!

    The following is the output.

    Code:
    prais var1 var2
    
    Iteration 0:  rho = 0.0000
    Iteration 1:  rho = 0.4025
    Iteration 2:  rho = 0.4098
    Iteration 3:  rho = 0.4101
    Iteration 4:  rho = 0.4101
    Iteration 5:  rho = 0.4101
    
    Prais-Winsten AR(1) regression -- iterated estimates
    
          Source |       SS           df       MS      Number of obs   =        20
    -------------+----------------------------------   F(1, 18)        =      0.00
           Model |           0         1           0   Prob > F        =    1.0000
        Residual |  9.02919029        18  .501621683   R-squared       =         .
    -------------+----------------------------------   Adj R-squared   =         .
           Total |  8.41900918        19  .443105746   Root MSE        =    .70825
    
    ------------------------------------------------------------------------------
            var1 |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
            var2 |    .025779   .0421113     0.61   0.548    -.0626935    .1142515
           _cons |   1.729973   .5127416     3.37   0.003     .6527428    2.807203
    -------------+----------------------------------------------------------------
             rho |   .4101012
    ------------------------------------------------------------------------------
    Durbin-Watson statistic (original)    1.126068
    Durbin-Watson statistic (transformed) 1.972264
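
    For reference, a minimal sketch of the full command sequence described in the question, assuming the data have been -tsset- on the time variable (here var2); the -estat dwatson- line is an added illustration rather than part of the original run:

    Code:
    * declare the time variable so time-series commands can be used (assumes var2 is the time index)
    tsset var2
    
    * OLS trend regression: var1 = a + b*var2 + error
    regress var1 var2
    estat dwatson            // Durbin-Watson test for first-order autocorrelation
    
    * Prais-Winsten FGLS (keeps the first observation)
    prais var1 var2
    
    * Cochrane-Orcutt FGLS (drops the first observation)
    prais var1 var2, corc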

  • #2
    Providing your data using dataex would have helped here - I could check some things.

    While -reg- with robust standard errors is consistent, -prais- uses a GLS approach to correct the coefficient estimates for serial correlation in the errors. What is odd in your model is that you get exactly zero explained variance with -prais-. I suspect your var2 looks almost identical to serially correlated errors. Is var2 a lag of var1 or something? Also, do you get the same problem with Cochrane-Orcutt?
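
    One minimal way to probe that suspicion, sketched here with the poster's variable names and assuming the data have been -tsset-; the residual variables ehat and ehat_lag are introduced only for illustration:

    Code:
    * fit the OLS model and save the residuals
    quietly regress var1 var2
    predict double ehat, residuals
    
    * how strongly are the residuals serially correlated?
    corrgram ehat, lags(5)
    
    * does var2 move with the lagged residuals?
    generate double ehat_lag = L.ehat
    correlate var2 ehat_lag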

    • #3
      Originally posted by Phil Bromiley

      Thank you so much! Please find the data below. Var2 is just the time variable. Under Prais-Winsten estimation, I got exactly zero explained variance. I have also tried Cochrane-Orcutt (-prais, corc-), and it works well. I am not sure whether I should use Cochrane-Orcutt or Prais-Winsten estimation; I have read that they are essentially identical. The Stata manual suggests that for small samples Prais-Winsten has the advantage of keeping the first observation, and another paper I read says Prais-Winsten is marginally more efficient than Cochrane-Orcutt. So I am not sure which to choose.

      Moreover, I find that some authors use a maximum-likelihood procedure to correct for autocorrelation. May I ask which is the better method, maximum likelihood or feasible GLS? And if possible, could you please tell me the Stata code for the maximum-likelihood correction for autocorrelation? (See the sketch after the data listing below.)

      Many thanks again!

      Code:
      * Example generated by -dataex-. To install: ssc install dataex
      clear
      input float var1 byte var2
       1.349197  1
       3.006986  2
      2.2794564  3
      1.2075024  4
      2.2000134  5
        3.77871  6
       2.576145  7
       2.660806  8
       5.795158  9
      4.1771464 10
       5.109554 11
        4.82035 12
      2.6444435 13
       2.671431 14
       1.761404 15
      2.1607766 16
       2.327385 17
        2.89884 18
       2.363969 19
      2.0101905 20
      end
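
      On the maximum-likelihood question above: one way to fit a regression with AR(1) errors by maximum likelihood in Stata is -arima-. A minimal sketch using the variable names from the listing (illustrative, not output from this thread):

      Code:
      * declare the time variable (var2 is the time index in the data above)
      tsset var2
      
      * regression of var1 on var2 with AR(1) errors, estimated by maximum likelihood
      arima var1 var2, ar(1)

      The Prais-Winsten and Cochrane-Orcutt fits for comparison are -prais var1 var2- and -prais var1 var2, corc-.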
