
  • Eregress (Stata 15)

    The new package looks very promising. Thank you for providing another (much improved) version of the (in my opinion) best package for applied work.

    It appears that the new "eregress" command could really help me with my ongoing project. However, as long as my institution does not upgrade, I have thought about purchasing a student version myself. As it happens, I have to give a presentation next week and would therefore like to implement it as quickly as possible.

    Is there a way to obtain a download version even if you are not from an eligible country?

    Alternatively, is there a Stata workaround I could use for this type of regression (sample selection added to an endogenous treatment model)?

    Code:
    eregress y x1, endogenous(x2 = x3 x4) entreat(treated = x2 x3 x5) select(selected = x2 x6)

    as stated here: https://www.stata.com/new-in-stata/e...ession-models/
    Last edited by Justus F.C. Meyer; 07 Jun 2017, 06:53.

  • #2
    Maybe ask for an evaluation copy?

    http://www.stata.com/customer-service/evaluate-stata/
    -------------------------------------------
    Richard Williams, Notre Dame Dept of Sociology
    StataNow Version: 19.5 MP (2 processor)

    EMAIL: [email protected]
    WWW: https://academicweb.nd.edu/~rwilliam/



    • #3
      Thank you for the link. I have requested such a copy and have also written an email to Stata's support team. In the meantime, do you see any other way of implementing this kind of model?



      • #4
        -cmp- (available from SSC) estimates a wide range of multi-equation models; you might see whether it can handle your specification. A rough sketch of getting started is below.
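
        This is only a minimal sketch, assuming a setup like the one in your first post. The equation system and the indicators() choices are my guess rather than a tested replication of the -eregress- specification; see the selection and treatment examples in -help cmp- before relying on them.

        Code:
        * Get -cmp- and its helper package -ghk2- from SSC, then read the help file
        ssc install ghk2
        ssc install cmp
        help cmp

        * -cmp setup- defines the $cmp_* indicator macros used in indicators()
        cmp setup

        * Untested sketch: outcome y with endogenous regressor x2, a probit
        * treatment equation for treated, and a probit selection equation for
        * selected (the outcome is modelled only where selected == 1)
        cmp (y = x1 x2 treated) (x2 = x3 x4) (treated = x2 x3 x5) (selected = x2 x6), ///
            indicators(selected $cmp_cont $cmp_probit $cmp_probit)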
        -------------------------------------------
        Richard Williams, Notre Dame Dept of Sociology
        StataNow Version: 19.5 MP (2 processor)

        EMAIL: [email protected]
        WWW: https://academicweb.nd.edu/~rwilliam/



        • #5
          Hi all,

          I am new to using Stata, but if I may ask: what is the main difference between regressing with ivregress and with eregress when you only have an endogeneity problem in your equation? Does eregress use maximum likelihood to estimate the coefficients, or are there other differences?

          Thank you.



          • #6
            -eregress- uses maximum likelihood to estimate the coefficients of the main equation, the endogenous regressor equations, and the variance and correlation parameters.

            You can use the linear prediction (fitted values) to estimate covariate effects. This can be calculated after -ivregress- or -eregress-.

            The conditional mean is used when you want to make a prediction for given values of the covariates. -eregress- allows prediction of the mean of the response conditional on the covariates and instruments. -ivregress- does not.

            Let me show you an example comparing -ivregress- to -eregress-. First we load the class of 2010 data and estimate the model using the limited-information maximum likelihood estimator of -ivregress-. This will give us the same coefficient estimates as -eregress-. Then we calculate the linear prediction.

            Code:
            . webuse class10
            (Class of 2010 profile)
            
            . ivregress liml gpa income (hsgpa = i.hscomp)
            
            Instrumental variables (LIML) regression          Number of obs   =      1,528
                                                              Wald chi2(2)    =    1167.79
                                                              Prob > chi2     =     0.0000
                                                              R-squared       =     0.6444
                                                              Root MSE        =     .37908
            
            ------------------------------------------------------------------------------
                     gpa |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
            -------------+----------------------------------------------------------------
                   hsgpa |   1.235868   .1336861     9.24   0.000     .9738484    1.497888
                  income |   .0575145   .0055174    10.42   0.000     .0467007    .0683284
                   _cons |  -1.217141   .3828614    -3.18   0.001    -1.967535   -.4667464
            ------------------------------------------------------------------------------
            Instrumented:  hsgpa
            Instruments:   income 2.hscomp 3.hscomp
            
            . predict ivregxb
            (option xb assumed; fitted values)

            Now we fit the same model with -eregress-. Then we predict the conditional mean.

            Code:
            . eregress gpa income, endogenous(hsgpa = income i.hscomp)
            
            Iteration 0:   log likelihood = -638.58598  
            Iteration 1:   log likelihood = -638.58194  
            Iteration 2:   log likelihood = -638.58194  
            
            Extended linear regression                      Number of obs     =      1,528
                                                            Wald chi2(2)      =    1167.79
            Log likelihood = -638.58194                     Prob > chi2       =     0.0000
            
            ------------------------------------------------------------------------------
                         |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
            -------------+----------------------------------------------------------------
            gpa          |
                  income |   .0575145   .0055174    10.42   0.000     .0467007    .0683284
                   hsgpa |   1.235868    .133686     9.24   0.000     .9738484    1.497888
                   _cons |  -1.217141   .3828614    -3.18   0.001    -1.967535   -.4667464
            -------------+----------------------------------------------------------------
            hsgpa        |
                  income |   .0356403   .0019553    18.23   0.000     .0318079    .0394726
                         |
                  hscomp |
               moderate  |  -.1310549   .0136503    -9.60   0.000    -.1578091   -.1043008
                   high  |  -.2331173   .0232712   -10.02   0.000     -.278728   -.1875067
                         |
                   _cons |   2.951233   .0164548   179.35   0.000     2.918982    2.983483
            -------------+----------------------------------------------------------------
               var(e.gpa)|   .1436991   .0083339                      .1282592    .1609977
             var(e.hsgpa)|   .0591597   .0021403                        .05511     .063507
            -------------+----------------------------------------------------------------
            corr(e.hsgpa,|
                   e.gpa)|   .2642138   .0832669     3.17   0.002     .0948986    .4186724
            ------------------------------------------------------------------------------
            
            . predict eregmean
            (option mean assumed; mean of gpa)

            We can see that the conditional mean is a better predictor of GPA than the linear prediction by comparing the mean squared errors.

            Code:
            . gen seivregxb = (gpa-ivregxb)^2
            (972 missing values generated)
            
            . gen seeregmean = (gpa-eregmean)^2
            (972 missing values generated)
            
            . sum seivregxb seeregmean
            
                Variable |        Obs        Mean    Std. Dev.       Min        Max
            -------------+---------------------------------------------------------
               seivregxb |      1,528    .1436991     .166354   2.50e-07    1.30077
              seeregmean |      1,528    .1336676    .1659806   6.28e-08   1.582651



            • #7
              Thank you, Charles, for the clear explanation.
