
  • Too significant... too good to be true?

    I am doing an event study in which I measure the CARs of 78 firms against the index.
    I test whether the CARs differ for firms that are politically connected (so I add a connected dummy to the regression).
    I used the following code:

    Code:
    use "file with returns of firms"
    sort _j date
    by _j: gen datenum=_n
    format %td date
    by _j: gen target=datenum if date==td(09nov2007)
    sort date
    egen td=min(target), by(_j)
    drop target
    gen dif=datenum-td
    sort _j
    by _j: gen event_window=1 if dif>=-10 & dif<=60
    by _j: gen estimation_window=1 if dif<-50
    replace event_window=0 if event_window==.
    replace estimation_window=0 if estimation_window==.
    merge m:1 date using "file with the indexreturns"
    set more off
    gen predicted_return=.
    forvalues i=1/78 {
        l _j if _j==`i' & dif==0
        reg returns returnindex if _j==`i' & estimation_window==1
        predict p if _j==`i'
        replace predicted_return = p if _j==`i' & event_window==1
        drop p
    }
    sort _j date
    gen abnormal_return=returns-predicted_return if event_window==1
    by _j: egen cumulative_abnormal_return = sum(abnormal_return)
    drop _merge
    merge m:m mnem using "file with descriptives of the firms"
    sort conn
    by conn: reg cumulative_abnormal_return if dif==0, robust
    reg cumulative_abnormal_return if dif==0, robust
    regress cumulative_abnormal_return connecteddummy size leverage i.sic2
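
    A quick sanity check on what these steps leave in memory — just a sketch, using the variables defined above:

    Code:
    * sketch: does every firm have estimation-window days, and how many
    * rows will enter the cross-sectional regressions?
    bysort _j: egen est_n = total(estimation_window)
    tab est_n
    count if dif==0    // one event-day row per firm: should report 78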

    But now all my variables are significant, with p-values of 0.000 (though different t-values).
    This seems too good to be true... Is there any mistake in my code?
    I am most doubtful about the part I made bold, since I do not follow the Princeton guidelines exactly there (https://dss.princeton.edu/online_hel...ventstudy.html)

    Thank you!!!!

  • #2
    Barbara:
    check whether -merge m:m- (which is never a good idea; see the -merge- entry in the Stata PDF manual) messed things up.
    Kind regards,
    Carlo
    (Stata 19.0)



    • #3
      Thanks for your reply, Carlo!
      However, the m:m was an exact match and nothing was messed up...
      Any other ideas about what might have gone wrong?



      • #4
        A very low p-value means only that the sample size is large enough to make it clear that the parameter estimate is different from zero. Sometimes that's key information and sometimes it doesn't help much. You need to look at what is being estimated! Also worry a bit, even with very large sample sizes, about whether the other ideal conditions for the regression are satisfied. You're using robust standard errors, which will help.



        • #5
          Thank you Nick!
          I agree that I should worry.
          I already used robust standard errors; same 0.000 p-values.

          I estimated ARs and CARs for every firm in this event study.
          The ARs do not appear to be significantly different from zero, but the CARs overall (and in the two subgroups, non-connected and connected) are.
          I am not sure whether that helps explain why p-values of 0.000 are occurring?



          • #6
            Like Carlo, I am a little worried about the m:m merge. I've never read a good word about it, and there are usually much better ways to get the merge you want.

            Beyond that, it is hard to say, especially since we can't see your output. In particular, what was your N after all these merges? 78, or 78,000? Showing us your output could help. Use code tags so it is legible; see point 12 in the FAQ.
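
            One quick way to check, as a sketch (assuming your merged data are in memory and dif==0 marks each firm's event day):

            Code:
            count               // total rows in memory after the merges
            count if dif==0     // rows at the event day: 78 firms?
            bysort _j: gen byte first = _n==1
            count if first      // number of distinct firms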
            -------------------------------------------
            Richard Williams, Notre Dame Dept of Sociology
            StataNow Version: 19.5 MP (2 processor)

            EMAIL: [email protected]
            WWW: https://www3.nd.edu/~rwilliam



            • #7
              Thank you Richard!
              -dataex- gives an error, r(1000), saying the statement exceeds the linesize limit..
              I looked at the number of observations of cumulative_abnormal_return. Before the m:m merge it was 14,768, since every trading day within a firm gets assigned the firm's cumulative_abnormal_return. After the m:m merge the number of observations of cumulative_abnormal_return is still 14,768.....
              I really hope you can help me with this; I am so stuck here..



              • #8
                Barbara:
                unfortunately, -merge m:m- can give the expected number of observations but the wrong correspondence among them (say, the value of -var1- that originally corresponded to -id1- merges with -id2-, and so dramatically on).
                That's why no reasonably experienced Stata user will ever consider -merge m:m- a trustworthy option.
                See -Troubleshooting m:m merges-, -merge- entry, Stata .pdf manual.
                Last edited by Carlo Lazzaro; 18 May 2018, 01:16.
                Kind regards,
                Carlo
                (Stata 19.0)
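
                A safer sketch, assuming -mnem- uniquely identifies firms in the descriptives file (if it does not, -merge m:1- will complain instead of silently mismatching):

                Code:
                merge m:1 mnem using "file with descriptives of the firms", ///
                    assert(match master) nogenerate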



                • #9
                  Okay, thank you Carlo!
                  I tried to look for alternatives to -merge m:m- and came across -joinby-. Would that be a way to combine my dataset of (cumulative) returns for each firm on multiple days with the static per-firm data on leverage, size, etc. in year -1?



                  • #10
                    Barbara:
                    since I do not know your dataset, I cannot say.
                    Kind regards,
                    Carlo
                    (Stata 19.0)



                    • #11
                      Code:
                       . regress car conn size leverage i.sic2
                      
                            Source |       SS           df       MS      Number of obs   =    14,768
                      -------------+----------------------------------   F(18, 14749)    =    336.98
                             Model |  12.3409787        18  .685609925   Prob > F        =    0.0000
                          Residual |  30.0077419    14,749  .002034561   R-squared       =    0.2914
                      -------------+----------------------------------   Adj R-squared   =    0.2905
                             Total |  42.3487206    14,767  .002867794   Root MSE        =    .04511
                      
                      ------------------------------------------------------------------------------
                               car |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
                      -------------+----------------------------------------------------------------
                              conn |  -.0141297   .0009227   -15.31   0.000    -.0159384   -.0123211
                              size |  -.0043294   .0003849   -11.25   0.000    -.0050838    -.003575
                          leverage |  -.0198001    .002426    -8.16   0.000    -.0245553    -.015045
                                   |
                              sic2 |
                               15  |   .0135987   .0023398     5.81   0.000     .0090123     .018185
                               20  |   .0535925   .0026292    20.38   0.000      .048439     .058746
                               27  |   -.009059   .0024154    -3.75   0.000    -.0137935   -.0043245
                               32  |  -.0246615   .0020912   -11.79   0.000    -.0287605   -.0205624
                               35  |   .0171415   .0025581     6.70   0.000     .0121274    .0221556
                               36  |   -.019383   .0025865    -7.49   0.000    -.0244528   -.0143132
                               42  |  -.0813206   .0028621   -28.41   0.000    -.0869308   -.0757105
                               50  |    .012346   .0026598     4.64   0.000     .0071324    .0175595
                               60  |   .0390304   .0031524    12.38   0.000     .0328513    .0452095
                               62  |   .0078944   .0025761     3.06   0.002     .0028449    .0129439
                               65  |  -.0444362    .002001   -22.21   0.000    -.0483584    -.040514
                               70  |   -.000822   .0021413    -0.38   0.701    -.0050191    .0033751
                               73  |  -.0207549   .0025759    -8.06   0.000     -.025804   -.0157059
                               75  |  -.0457323    .002867   -15.95   0.000     -.051352   -.0401126
                               87  |  -.0552378   .0028689   -19.25   0.000    -.0608612   -.0496145
                                   |
                             _cons |   .0303937   .0050335     6.04   0.000     .0205275    .0402599
                      ------------------------------------------------------------------------------
                      
                      .
                      I found out how to show my output, Richard.
                      Is it strange that Stata assigned the cumulative_abnormal_return of each firm to each of the firm's trading dates, so that I have far more than 78 observations of cumulative_abnormal_return itself (14,768)?

                      Thank you for your reply Carlo. Too bad...



                      • #12
                        P reported as 0.000 just means P < 0.0005. I don't know if that was puzzling.
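
                        Also, with 14,768 firm-day rows but only 78 firms, each firm's CAR is counted once per trading day. A sketch of running the cross-section on one row per firm instead (assuming dif==0 marks each firm's event day):

                        Code:
                        preserve
                        keep if dif==0    // one row per firm
                        regress car conn size leverage i.sic2, robust
                        restore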



                        • #13
                          I didn't know that, indeed, but it still seems 'too good to be true', I guess..



                          • #14
                            It's your data and your field and not mine, but at the heart of your analysis in #1 are many regressions


                            Code:
                            regress returns returnindex
                            How plausible is a zero slope for your variables? Are you knocking over a straw person there?

                            Last edited by Nick Cox; 18 May 2018, 04:55.



                            • #15
                              Thank you for thinking along with me.
                              Here, too, a p-value of 0.000

