
  • Calculating p-values for aggregated average treatment effects

    Hi. I am studying the treatment effect of an intervention using a matching-based approach, where treated states are matched to untreated states. I then conduct a difference-in-differences analysis by running individual event-study regressions. To understand the overall treatment effect, I aggregate the estimates from the individual event-study regressions and calculate the average effect across all treated units (or subgroups of treated units with similar characteristics). This is similar to a stacked regression, but with custom weights to calculate more efficient averages. However, I am flummoxed when it comes to calculating the p-values of the stacked regression. I can calculate the p-values for the individual event-study regressions, but I'm not sure how to do that for the stacked regression. Any help will be much appreciated.

    Below is a sample of my data and the related code. Part 1 defines the program for the individual regressions, Part 2 runs the individual event studies, and Part 3 outputs the coefficients and SEs for the stacked regression.


    Code:
    
    *DATA SAMPLE
    
    clear
    input str50 state byte stepone str13 Treated_State byte(weeknum adopt_week) float(tested confirmed_1 recovered_1 deceased_1)
    "Bihar"                       1 "Bihar" 37 22   10.968039       .190893      .1850546   .0010001912
    "Bihar"                       0 "Bihar" 14 22   .08867207   .0045832135   .0023129422 .000027012813
    "Bihar"                       1 "Bihar" 24 22   1.5934727     .09181153     .06732514  .00046399885
    "Bihar"                       0 "Bihar"  9 22  .023821475     .00044607  .00013972557 3.5857715e-06
    "Bihar"                       1 "Bihar" 29 22    5.068347     .14344008     .13148248   .0007292264
    "Bihar"                       1 "Bihar" 36 22   10.247828     .18750395      .1810866    .000966963
    "Bihar"                       0 "Bihar" 20 22    .3337463    .024219856    .015814569  .00016518455
    "Bihar"                       1 "Bihar" 39 22   12.378263      .1976383     .19192722   .0010621055
    "Bihar"                       0 "Bihar" 12 22   .04013124   .0024207544   .0007249235 .000011235418
    "Bihar"                       0 "Bihar"  4 22           0 .000017450755  7.171544e-07  7.171544e-07
    "Bihar"                       1 "Bihar" 42 22   14.507592     .20766734      .2023336   .0011390802
    "Bihar"                       1 "Bihar" 28 22    4.260896     .13483803     .12253084   .0007003012
    "Bihar"                       1 "Bihar" 22 22   .58980757     .05235824    .033925343   .0002959457
    "Bihar"                       0 "Bihar" 11 22   .04245099    .001358888    .000448341   7.53012e-06
    "Bihar"                       0 "Bihar" 21 22    .4104603    .036436457     .02434966  .00022243737
    "Bihar"                       1 "Bihar" 26 22   2.7848125     .11582927      .1013988   .0005929671
    "Bihar"                       1 "Bihar" 23 22   1.0034382     .07272435      .0476965   .0003849923
    "Bihar"                       1 "Bihar" 45 22   16.280634     .21531174     .21075015    .001205656
    "Bihar"                       1 "Bihar" 47 22    17.28098      .2176378     .21473382    .001243187
    "Bihar"                       0 "Bihar"  3 22           0   3.34672e-06             0             0
    "Bihar"                       1 "Bihar" 48 22    5.016259     .06231139     .06166022   .0003585772
    "Bihar"                       0 "Bihar" 13 22   .06862139    .003441026   .0015243116  .00002067795
    "Bihar"                       0 "Bihar"  5 22 .0021611445 .000034781988  8.964429e-06    8.3668e-07
    "Bihar"                       1 "Bihar" 25 22   2.1707711     .10481963     .08708943   .0005343995
    "Bihar"                       1 "Bihar" 31 22    6.606687       .159154     .14880905   .0007728533
    "Bihar"                       0 "Bihar" 15 22   .11010375    .005678189   .0037403186   .0000334672
    "Bihar"                       0 "Bihar" 16 22    .1419289    .006759179    .005025698  .00004458309
    "Bihar"                       1 "Bihar" 43 22   15.157097     .21073844     .20558017   .0011626267
    "Bihar"                       0 "Bihar" 10 22  .030944014    .000708907   .0003206875  5.259132e-06
    "Bihar"                       1 "Bihar" 33 22    7.866549      .1729872      .1631078   .0008461226
    "Bihar"                       1 "Bihar" 44 22   15.731228     .21312033      .2083853    .001183663
    "Bihar"                       0 "Bihar"  7 22   .01026487   .0001143861 .000035618665   1.67336e-06
    "Bihar"                       0 "Bihar"  6 22  .006518694  .00005976286  .00002402467    8.3668e-07
    "Bihar"                       0 "Bihar" 19 22   .27508557    .015854968     .01087182  .00012155766
    "Bihar"                       1 "Bihar" 46 22   16.790081     .21676815      .2128226   .0012233458
    "Bihar"                       1 "Bihar" 35 22    9.480743     .18312895      .1762636      .0009274
    "Bihar"                       1 "Bihar" 40 22    13.10717     .20105493     .19557564   .0010884013
    "Bihar"                       1 "Bihar" 32 22     7.18386      .1659875      .1561607     .00080393
    "Bihar"                       1 "Bihar" 41 22   13.814168     .20439556     .19904032   .0011118283
    "Bihar"                       1 "Bihar" 38 22    11.61945     .19399765     .18851824   .0010309094
    "Bihar"                       1 "Bihar" 34 22    8.671193     .17866656     .16996786     .00088963
    "Bihar"                       0 "Bihar" 18 22    .2263394    .010706636    .007791762  .00008283133
    "Bihar"                       0 "Bihar"  8 22   .01659794   .0003028782  .00005581851 1.7928858e-06
    "Bihar"                       1 "Bihar" 27 22    3.595409     .12605326       .112047    .000643885
    "Bihar"                       0 "Bihar" 17 22    .1847392    .008343254    .006337254   .0000580895
    "Bihar"                       1 "Bihar" 30 22    5.962443     .15187284     .14039527   .0007506215
    "Kerala"                      0 "Bihar" 33 22   11.500824     1.0108571        .73942    .003441993
    "Andaman and Nicobar Islands" 0 "Bihar" 16 22    3.484779       .013638    .009859662             0
    "Andaman and Nicobar Islands" 0 "Bihar"  9 22           0    .008312343    .007916517             0
    "Goa"                         0 "Bihar" 42 22    24.99102       3.26321      3.152922     .04703154
    "Kerala"                      0 "Bihar" 23 22   2.9451056      .1053519    .067913376   .0003457041
    "Assam"                       0 "Bihar" 36 22   14.292813      .6104815     .59098816   .0027706614
    "Goa"                         0 "Bihar" 33 22   18.456299     2.6669946      2.418868    .035946194
    "Haryana"                     0 "Bihar" 20 22    1.635232      .0961351    .073335856    .001267538
    "Sikkim"                      0 "Bihar" 40 22    9.693072      .7866824      .6995482    .017555937
    "Andaman and Nicobar Islands" 0 "Bihar" 27 22    10.32965      .8549838      .7644117     .01270241
    "Himachal Pradesh"            0 "Bihar" 15 22    .7725636    .007716243    .004821918  .00009589041
    "Uttar Pradesh"               0 "Bihar" 38 22    8.136458      .2360498      .2218127   .0033841254
    "West Bengal"                 0 "Bihar" 12 22   .16232093    .004200241    .001553936   .0002930675
    "Kerala"                      0 "Bihar" 21 22    2.028432     .05916177    .030607015   .0001903406
    "Tamil Nadu"                  0 "Bihar" 18 22    1.870065     .15681656     .09355383    .002163004
    "Sikkim"                      0 "Bihar" 15 22     .779389     .01026248    .000989673             0
    "Tamil Nadu"                  0 "Bihar" 16 22   1.2541747     .08605569     .04768007   .0010987704
    "Tamil Nadu"                  0 "Bihar" 38 22   15.418982     1.0213224      .9902132     .01537316
    "Haryana"                     0 "Bihar" 43 22   15.681849      .9124741      .8887321    .010078524
    "Telangana"                   0 "Bihar" 20 22    .7892355     .12872688     .09636908    .001157212
    "Assam"                       0 "Bihar" 48 22    5.384646     .18090364     .17842126    .000901475
    "Telangana"                   0 "Bihar" 44 22   18.982489      .7740374      .7559922   .0041744066
    "Tamil Nadu"                  0 "Bihar" 10 22    .3525985    .011223236   .0028275126   .0000766233
    "Kerala"                      0 "Bihar" 36 22   14.798937     1.4142284     1.1824659    .004966751
    "Telangana"                   0 "Bihar" 11 22  .008976741   .0043782145    .002712827  .00010478238
    "DNHnDD"                      0 "Bihar" 48 22           0     .10070013       .099568  .00005958588
    "Goa"                         0 "Bihar" 16 22    3.593646     .05871985    .014981447  .00008348794
    "Haryana"                     0 "Bihar" 15 22    .6720359    .028770726    .013211495    .000394611
    "Kerala"                      0 "Bihar" 22 22   2.4545944      .0800362     .04671317  .00025785458
    "Tamil Nadu"                  0 "Bihar" 14 22    .8242128     .04646787     .02478575   .0004084059
    "Gujarat"                     0 "Bihar" 23 22     1.57717     .10784288      .0830006   .0039638146
    "Assam"                       0 "Bihar" 14 22     .349497    .008937268    .003072264  .00001541339
    "Uttar Pradesh"               0 "Bihar" 36 22    7.281742       .222857     .20943135    .003224936
    "Andaman and Nicobar Islands" 0 "Bihar" 29 22   13.269522      .9310543      .8765743    .013098237
    "Tripura"                     0 "Bihar" 23 22    5.139132     .15989837      .1149585   .0010592614
    "Kerala"                      0 "Bihar" 40 22    19.08958     1.8361642     1.6577562    .007048297
    "Mizoram"                     0 "Bihar" 44 22   15.313794       .355477      .3480465    .000671141
    "Mizoram"                     0 "Bihar" 46 22   16.351917      .3633389      .3568792   .0007550336
    "Tamil Nadu"                  0 "Bihar" 48 22     6.03661      .3163394      .3099539    .004662697
    "Goa"                         0 "Bihar" 22 22    8.970909      .4600928     .32867345   .0038682746
    "Telangana"                   0 "Bihar" 16 22   .17189798     .02581485    .011238582   .0005918477
    "Kerala"                      0 "Bihar"  6 22   .04594774   .0010960854   .0005893239  8.540926e-06
    "DNHnDD"                      0 "Bihar" 46 22           0      .3519142      .3475942  .00020855057
    "Tripura"                     0 "Bihar"  5 22           0 .000021471515             0             0
    "Telangana"                   0 "Bihar" 36 22   12.601858      .6781473      .6244607    .003721118
    "DNHnDD"                      0 "Bihar" 14 22    .9501266    .002591986   .0001936541             0
    "Tamil Nadu"                  0 "Bihar" 45 22   19.744255      1.093329     1.0681666    .016162608
    "Haryana"                     0 "Bihar" 18 22   1.1319395     .06329919     .04768017   .0009660993
    "Kerala"                      0 "Bihar" 15 22   .45039225    .007500966    .003583935  .00005978648
    "Rajasthan"                   0 "Bihar"  8 22   .12059858    .003066296   .0010130388  .00006508298
    "West Bengal"                 0 "Bihar" 39 22    6.106421      .5024678      .4686932    .008746178
    "Assam"                       0 "Bihar" 28 22    8.113241      .4270555      .3418553    .001447609
    "Sikkim"                      0 "Bihar" 11 22   .15182875             0             0             0
    "Sikkim"                      0 "Bihar" 38 22    6.616501      .7212565      .6543245      .0151463
    end
    
    
    
    ////////////////////////////////////////////////
    ////                                        ////
    ////    PART 1. Define regression program      ////
    ////                                        ////
    ////////////////////////////////////////////////
    
    
    capture program drop runEventStudies
    
    program define runEventStudies
        
        args data yList name
        
    
        use "${outdir}\\`data'".dta", clear
        
    
        * group indicator. So it's 1 both before and after the event 
        bysort state: egen treated = max(stepone)
        
        egen eventBlock = group(Treated_State adopt_week)
        
        qui gen post = weeknum > adopt_week //should turn on for the controls based on the state 
        qui gen treated_post = treated * post
        qui gen treated_trend = weeknum * treated
        
        
        qui sum eventBlock
        local N = r(max)
     
        disp "TOTAL: `N'"
     
     
          * Run regressions    - include test rates
        foreach y of local yList {
            
            disp "`y'"
            
            forvalues i = 1/`N' {
                disp "`i'"
                
                * Balanced T, with FEs to control for week 
                areg `y' post treated_post tested i.weeknum if eventBlock == `i'  ///
                        ,absorb(state)  //mine is not a balanced panel - restrict analysis to make it balanced
                local b_`y'_`i'_b = _b[treated_post]
                local se_`y'_`i'_b = _se[treated_post]  
                
            
                }
            }
        
                
    
    *** Step 2: Construct a dataset saving all coefficients and SEs
    
    *keep Treated_State adopt_week eventBlock cem_varlist 
        qui duplicates drop
    
        foreach suff in b  {
            
            foreach y of local yList {
                    
                qui gen b_`y'_`suff' = ""
                qui gen se_`y'_`suff' = ""
            
                forvalues i = 1/`N' {
                    
                    qui replace b_`y'_`suff' = "`b_`y'_`i'_`suff''" if eventBlock == `i'
                    qui replace se_`y'_`suff' = "`se_`y'_`i'_`suff''" if eventBlock == `i'
                    
                    }
                
                qui destring b_`y'_`suff', replace
                qui destring se_`y'_`suff', replace
                
                }
            }
     
        drop eventBlock
        
            * Save the data
            foreach x in Treated_State {
            encode `x', gen(temp)
            drop `x'
            rename temp `x'
            }
     
         * save
        order state weeknum cem_varlist 
        sort state weeknum
        save "${coeffdir}\coefficients_`name'.dta", replace
    
        
    end
    
    
    
    ////////////////////////////////////////////////////////////////////////////////
    ////////////                                                        ////////////
    ////////////             PART 2. Run the event studies                ////////////
    ////////////                                                        ////////////
    ////////////////////////////////////////////////////////////////////////////////
    
    
    * Acquired    
    runEventStudies  "1matched_cohort"                                        /// Data
                    `"confirmed_1 recovered_1 deceased_1"'                     /// yList
                     "one"                                                    /// name
                    
    
    
    
        
    ////////////////////////////////////////////////////////////////////////////////
    ////////////                                                        ////////////
    ////////////             PART 3. Calculate averages                    ////////////
    ////////////                                                        ////////////
    ////////////////////////////////////////////////////////////////////////////////
    
    
    * Uses Inverse variance average: these weights give the smallest SE of the sum of the vars
    
    
    
    local oneVars     "confirmed_1 recovered_1 deceased_1"
    
    
    local sets `" "" "_trend" "'
    
    
    foreach set of local sets {
        
        
        * open file, write header
        file open EScoeff using "${paperdir}\Tables\EScoeff`set'.csv", write replace
    
    
        file write EScoeff "Variable,All states,Early adopters,Late adopters" _n
                    
            foreach sample in one two three four {
            
            use "${coeffdir}\coefficients_`sample'.dta", replace
    
            * Average effect of all states
            foreach var of local `sample'Vars {
                
                * ALL
                qui egen wavg_`var' = wtmean(b_`var'_b`set') ///
                    if treated == 1, weight(1/(se_`var'_b`set'^2))
                qui egen wvar_`var' = mean(1/(se_`var'_b`set'^2)) /// SE command
                    if treated == 1 
                qui replace wvar_`var' = sqrt(1/wvar_`var') ///follows from 206
                local t = wavg_`var'/wvar_`var'
                di 2*ttail(e(df_r),abs(`t´))
                
            df = st_numscalar("e(df_r)")
            t = wavg_`var'/wvar_`var'
            2*ttail(df, abs(t))
    
                
                * record values
                sum wavg_`var'
                local avgAllStates : display %4.2f r(mean)
                drop wavg_`var'
                sum wvar_`var'
                local sdAllStates : display %4.2f r(mean)
                drop wvar_`var'        
    
                
                
                file write EScoeff "`var',`avgAllStates',`avgEarlyStates',`avgLateStates'" _n  
                file write EScoeff `",="(`sdAllStates')",="(`sdEarlyStates')",="(`sdLateStates')""' _n
                
            }        
            
            }
    
        file close EScoeff
        
    }

  • #2
    You're trying to get an average effect across multiple "studies", so maybe meta-analysis?
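
    For instance, with the per-model coefficients and standard errors stored as variables (hypothetical names b and se here), the meta suite in Stata 16+ gives you the inverse-variance weighted average and its p-value directly. A minimal sketch:

    Code:
    * sketch, assuming b holds the coefficients and se their standard errors
    meta set b se
    meta summarize, common    // common-effect model = inverse-variance weighted average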



    • #3
      I cannot follow everything you are doing in #1, but since I asked you to repost your question, I will chime in. What I see in #1 are linear regression models. If you want a single p-value, then the question becomes: how are you aggregating the estimates? If you are summing or taking means, then use the combination of suest and lincom (or margins) to obtain the aggregated estimate instead of doing this by hand. Here is an easier-to-follow example.

      Code:
      webuse grunfeld, clear
      *areg invest mvalue, absorb(company)  
      regress invest mvalue i.company
      est sto m1
      
      *areg kstock mvalue, absorb(company)
      regress kstock mvalue i.company
      est sto m2
      
      suest m1 m2
      lincom (_b[m1_mean:mvalue] + _b[m2_mean:mvalue])/2
      Res.:

      Code:
      . suest m1 m2
      
      Simultaneous results for m1, m2                            Number of obs = 200
      
      ------------------------------------------------------------------------------
                   |               Robust
                   | Coefficient  std. err.      z    P>|z|     [95% conf. interval]
      -------------+----------------------------------------------------------------
      m1_mean      |
            mvalue |   .1898776   .0508028     3.74   0.000     .0903059    .2894492
                   |
           company |
                2  |   250.9496   90.30609     2.78   0.005      73.9529    427.9463
                3  |  -51.44415   91.42052    -0.56   0.574    -230.6251    127.7368
                4  |   169.3784   149.7128     1.13   0.258    -124.0533    462.8101
                5  |   232.7314   172.3858     1.35   0.177    -105.1386    570.6015
                6  |    190.568   163.0109     1.17   0.242    -128.9275    510.0634
                7  |   234.0336   176.5098     1.33   0.185    -111.9193    579.9866
                8  |   130.3806   150.7384     0.86   0.387    -165.0612    425.8225
                9  |   193.4162   167.2998     1.16   0.248    -134.4853    521.3178
               10  |   204.4981   180.3994     1.13   0.257    -149.0783    558.0745
                   |
             _cons |  -214.8799   183.9453    -1.17   0.243     -575.406    145.6462
      -------------+----------------------------------------------------------------
      m1_lnvar     |
             _cons |   8.918995   .3139122    28.41   0.000     8.303738    9.534251
      -------------+----------------------------------------------------------------
      m2_mean      |
            mvalue |    .257216   .1087732     2.36   0.018     .0440245    .4704075
                   |
           company |
                2  |   253.9693   224.8077     1.13   0.259    -186.6457    694.5843
                3  |   367.1194   235.1545     1.56   0.118    -93.77491    828.0136
                4  |   409.2395    348.968     1.17   0.241    -274.7252    1093.204
                5  |   893.5264   398.0466     2.24   0.025     113.3694    1673.683
                6  |   462.5882   376.2297     1.23   0.219    -274.8085    1199.985
                7  |   742.7158   405.5893     1.83   0.067    -52.22459    1537.656
                8  |   379.3704   350.2209     1.08   0.279    -307.0499    1065.791
                9  |   678.3791   385.7032     1.76   0.079    -77.58525    1434.343
               10  |   453.9987   412.7726     1.10   0.271    -355.0208    1263.018
                   |
             _cons |  -466.2992   420.2309    -1.11   0.267    -1289.937    357.3383
      -------------+----------------------------------------------------------------
      m2_lnvar     |
             _cons |   10.79787   .2004445    53.87   0.000       10.405    11.19073
      ------------------------------------------------------------------------------
      
       . lincom (_b[m1_mean:mvalue] + _b[m2_mean:mvalue])/2
      
       ( 1)  .5*[m1_mean]mvalue + .5*[m2_mean]mvalue = 0
      
      ------------------------------------------------------------------------------
                   | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
      -------------+----------------------------------------------------------------
               (1) |   .2235468   .0787706     2.84   0.005     .0691592    .3779343
      ------------------------------------------------------------------------------
      Of course, there are other ways to achieve this such as stacking the data and doing one estimation, but this works for most purposes.
      Last edited by Andrew Musau; 19 Jul 2024, 12:32.



      • #4
        It doesn't look like anything interesting is going on in the regressions themselves; the wrinkle comes in the later stages, where you weight the effects by the inverse of the squared SE. I suppose you have your reasons, but I don't understand what's happening there.

        In Andrew's method, you can use lincom and add the weight there.

        I recall some ways to aggregate t-values or p-values, e.g., Fisher's method for p-values and Stouffer's Z-score method for t-statistics, but I'm not sure whether weighting can be applied in those methods.
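
        For reference, a minimal sketch of both on made-up p-values (not your data). Stouffer's method does have a weighted variant, sometimes called Liptak's method; whether your inverse-variance weights are appropriate there is a separate question.

        Code:
        * sketch: combine p-values from k independent tests (made-up numbers)
        local pvals "0.03 0.20 0.008"
        local k : word count `pvals'
        
        * Fisher: -2*sum(ln p) ~ chi2 with 2k degrees of freedom
        local F 0
        foreach p of local pvals {
            local F = `F' - 2*ln(`p')
        }
        di "Fisher: p = " chi2tail(2*`k', `F')
        
        * Stouffer: Z = sum(w*z)/sqrt(sum(w^2)), with equal weights w = 1 here
        local num 0
        local den 0
        foreach p of local pvals {
            local num = `num' + invnormal(1 - `p')
            local den = `den' + 1
        }
        di "Stouffer: one-sided p = " normal(-`num'/sqrt(`den'))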




        • #5
          Andrew Musau and George Ford, thanks for explaining that. Let me clarify the points you raised and rephrase my question:

          1) I aggregate the estimates by calculating the average effect across all treated units. The average effect is conceptually equivalent to the output of a stacked regression.
          2) I use custom weights when aggregating to calculate more efficient averages. In this case, I weight each coefficient by the inverse of its squared standard error (written out in symbols below).
          3) What I end up with is a single averaged coefficient and standard error.
          4) The individual regressions and the aggregated estimates in the code above work just fine.
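
          In symbols, with per-regression coefficients $b_i$ and standard errors $se_i$, this is the standard inverse-variance weighted mean (a restatement of what the code computes, not new methodology):

          $$\bar b = \frac{\sum_i b_i/se_i^2}{\sum_i 1/se_i^2}, \qquad SE(\bar b) = \sqrt{\frac{1}{\sum_i 1/se_i^2}}, \qquad t = \frac{\bar b}{SE(\bar b)}$$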

          I am trying to calculate the significance level of this average coefficient. To the code above, I added the bit in red below from https://www.stata-journal.com/articl...article=st0137 on Maarten's recommendation in my previous post. However, I get the error "invalid 'local'" and am unable to figure out what I am doing wrong here. Technically, the code should run irrespective of the fact that the final results are aggregated values. I'd greatly appreciate any guidance on how to do this within the existing code, if possible.


          Code:
          
          local oneVars     "confirmed_1 recovered_1 deceased_1"
          local twoVars     "confirmed_2 recovered_2 deceased_2"
          local threeVars "confirmed_3 recovered_3 deceased_3"
          local fourVars     "confirmed_4 recovered_4 deceased_4"
          
          local sets `" "" "_trend" "'
          
          
          foreach set of local sets {
              
              
              * open file, write header
              file open EScoeff using "${paperdir}\Tables\EScoeff`set'.csv", write replace
          
          
              file write EScoeff "Variable,All states,Early adopters,Late adopters" _n
                          
                  foreach sample in one two three four {
                  
                  use "${coeffdir}\coefficients_`sample'.dta", replace
          
                  * Average effect of all states
                  foreach var of local `sample'Vars {
                      
                      * ALL
                      qui egen wavg_`var' = wtmean(b_`var'_b`set') ///
                          if treated == 1, weight(1/(se_`var'_b`set'^2))
                      qui egen wvar_`var' = mean(1/(se_`var'_b`set'^2)) /// SE command
                          if treated == 1 
                      qui replace wvar_`var' = sqrt(1/wvar_`var') ///follows from 206
          
          
          local t = wavg_`var'/wvar_`var'              
          di 2*ttail(e(df_r),abs(`t´))              
          df = st_numscalar("e(df_r)")            
          t = wavg_`var'/wvar_`var'              
          2*ttail(df, abs(t))               
          
          
                      
                      * record values
                      sum wavg_`var'
                      local avgAllStates : display %4.2f r(mean)
                      drop wavg_`var'
                      sum wvar_`var'
                      local sdAllStates : display %4.2f r(mean)
                      drop wvar_`var'        
          
                      
                      
                 
          /*
                      file write testtable "`var',`avgAllStates'" _n  
                      file write testtable `",="(`sdAllStates')""' _n  
                      */
          
                      file write EScoeff "`var',`avgAllStates',`avgEarlyStates',`avgLateStates'" _n  
                      file write EScoeff `",="(`sdAllStates')",="(`sdEarlyStates')",="(`sdLateStates')""' _n
                      
                  }        
                  
                  }
          
              file close EScoeff
              
          }



          • #6
            My response to your approach is already stated in #3. Let Stata handle this computation for you. The weighted average should not be an issue with lincom.

            Code:
            webuse grunfeld, clear
            *areg invest mvalue, absorb(company)
            regress invest mvalue i.company
            local w_1= 1/_se[mvalue]
            di %4.3f `w_1'
            est sto m1
            
            *areg kstock mvalue, absorb(company)
            regress kstock mvalue i.company
            local w_2= 1/_se[mvalue]
            di %4.3f `w_2'
            est sto m2
            
            suest m1 m2
            lincom (`w_1'*_b[m1_mean:mvalue]) + (`w_2'*_b[m2_mean:mvalue])
            Res.:

            Code:
            . webuse grunfeld, clear
            
            .
            . *areg invest mvalue, absorb(company)
            
            .
            . regress invest mvalue i.company
            
                  Source |       SS           df       MS      Number of obs   =       200
            -------------+----------------------------------   F(10, 189)      =    106.36
                   Model |   7947627.4        10   794762.74   Prob > F        =    0.0000
                Residual |  1412316.51       189  7472.57415   R-squared       =    0.8491
            -------------+----------------------------------   Adj R-squared   =    0.8411
                   Total |  9359943.92       199  47034.8941   Root MSE        =    86.444
            
            ------------------------------------------------------------------------------
                  invest | Coefficient  Std. err.      t    P>|t|     [95% conf. interval]
            -------------+----------------------------------------------------------------
                  mvalue |   .1898776   .0179944    10.55   0.000     .1543819    .2253733
                         |
                 company |
                      2  |   250.9496   50.53491     4.97   0.000     151.2647    350.6345
                      3  |  -51.44415   50.99737    -1.01   0.314    -152.0413    49.15302
                      4  |   169.3784   70.98565     2.39   0.018     29.35244    309.4043
                      5  |   232.7314   78.71866     2.96   0.004      77.4514    388.0115
                      6  |    190.568   75.54874     2.52   0.012     41.54088     339.595
                      7  |   234.0336    80.0986     2.92   0.004     76.03153    392.0357
                      8  |   130.3806   71.35614     1.83   0.069    -10.37613    271.1374
                      9  |   193.4162   76.99706     2.51   0.013     41.53223    345.3003
                     10  |   204.4981   81.43403     2.51   0.013     43.86171    365.1345
                         |
                   _cons |  -214.8799   80.34482    -2.67   0.008    -373.3677   -56.39209
            ------------------------------------------------------------------------------
            
            .
            . local w_1= 1/_se[mvalue]
            
            .
            . di %4.3f `w_1'
            55.573
            
            .
            . est sto m1
            
            .
            .
            .
            . *areg kstock mvalue, absorb(company)
            
            .
            . regress kstock mvalue i.company
            
                  Source |       SS           df       MS      Number of obs   =       200
            -------------+----------------------------------   F(10, 189)      =     17.98
                   Model |  8796848.19        10  879684.819   Prob > F        =    0.0000
                Residual |  9245201.08       189  48916.4079   R-squared       =    0.4876
            -------------+----------------------------------   Adj R-squared   =    0.4605
                   Total |  18042049.3       199  90663.5642   Root MSE        =    221.17
            
            ------------------------------------------------------------------------------
                  kstock | Coefficient  Std. err.      t    P>|t|     [95% conf. interval]
            -------------+----------------------------------------------------------------
                  mvalue |    .257216   .0460394     5.59   0.000     .1663988    .3480331
                         |
                 company |
                      2  |   253.9693   129.2956     1.96   0.051    -1.078529    509.0171
                      3  |   367.1194   130.4788     2.81   0.005     109.7375    624.5012
                      4  |   409.2395   181.6196     2.25   0.025     50.97756    767.5014
                      5  |   893.5264   201.4048     4.44   0.000     496.2363    1290.817
                      6  |   462.5882   193.2945     2.39   0.018      81.2965    843.8799
                      7  |   742.7158   204.9354     3.62   0.000     338.4611     1146.97
                      8  |   379.3704   182.5675     2.08   0.039      19.2386    739.5022
                      9  |   678.3791        197     3.44   0.001     289.7778     1066.98
                     10  |   453.9987   208.3522     2.18   0.031     43.00411    864.9932
                         |
                   _cons |  -466.2992   205.5654    -2.27   0.024    -871.7965   -60.80182
            ------------------------------------------------------------------------------
            
            .
            . local w_2= 1/_se[mvalue]
            
            .
            . di %4.3f `w_2'
            21.721
            
            .
            . est sto m2
            
            .
            .
            .
            . suest m1 m2
            
            Simultaneous results for m1, m2                            Number of obs = 200
            
            ------------------------------------------------------------------------------
                         |               Robust
                         | Coefficient  std. err.      z    P>|z|     [95% conf. interval]
            -------------+----------------------------------------------------------------
            m1_mean      |
                  mvalue |   .1898776   .0508028     3.74   0.000     .0903059    .2894492
                         |
                 company |
                      2  |   250.9496   90.30609     2.78   0.005      73.9529    427.9463
                      3  |  -51.44415   91.42052    -0.56   0.574    -230.6251    127.7368
                      4  |   169.3784   149.7128     1.13   0.258    -124.0533    462.8101
                      5  |   232.7314   172.3858     1.35   0.177    -105.1386    570.6015
                      6  |    190.568   163.0109     1.17   0.242    -128.9275    510.0634
                      7  |   234.0336   176.5098     1.33   0.185    -111.9193    579.9866
                      8  |   130.3806   150.7384     0.86   0.387    -165.0612    425.8225
                      9  |   193.4162   167.2998     1.16   0.248    -134.4853    521.3178
                     10  |   204.4981   180.3994     1.13   0.257    -149.0783    558.0745
                         |
                   _cons |  -214.8799   183.9453    -1.17   0.243     -575.406    145.6462
            -------------+----------------------------------------------------------------
            m1_lnvar     |
                   _cons |   8.918995   .3139122    28.41   0.000     8.303738    9.534251
            -------------+----------------------------------------------------------------
            m2_mean      |
                  mvalue |    .257216   .1087732     2.36   0.018     .0440245    .4704075
                         |
                 company |
                      2  |   253.9693   224.8077     1.13   0.259    -186.6457    694.5843
                      3  |   367.1194   235.1545     1.56   0.118    -93.77491    828.0136
                      4  |   409.2395    348.968     1.17   0.241    -274.7252    1093.204
                      5  |   893.5264   398.0466     2.24   0.025     113.3694    1673.683
                      6  |   462.5882   376.2297     1.23   0.219    -274.8085    1199.985
                      7  |   742.7158   405.5893     1.83   0.067    -52.22459    1537.656
                      8  |   379.3704   350.2209     1.08   0.279    -307.0499    1065.791
                      9  |   678.3791   385.7032     1.76   0.079    -77.58525    1434.343
                     10  |   453.9987   412.7726     1.10   0.271    -355.0208    1263.018
                         |
                   _cons |  -466.2992   420.2309    -1.11   0.267    -1289.937    357.3383
            -------------+----------------------------------------------------------------
            m2_lnvar     |
                   _cons |   10.79787   .2004445    53.87   0.000       10.405    11.19073
            ------------------------------------------------------------------------------
            
            .
            . lincom (`w_1'*_b[m1_mean:mvalue]) + (`w_2'*_b[m2_mean:mvalue])
            
             ( 1)  55.57279*[m1_mean]mvalue + 21.72051*[m2_mean]mvalue = 0
            
            ------------------------------------------------------------------------------
                         | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
            -------------+----------------------------------------------------------------
                     (1) |   16.13889   5.110215     3.16   0.002     6.123052    26.15473
            ------------------------------------------------------------------------------
            Last edited by Andrew Musau; 21 Jul 2024, 09:20.



            • #7
              Andrew Musau Thanks, Andrew. This is very helpful. I'm confused about why you first run an areg with the absorb() option, followed by a regular regression.

              Also, out of curiosity, since I tried several ways to do this but kept failing: if this were to be done manually in Stata, how would one do it?



              • #8
                Originally posted by Scott Rick
                Andrew Musau, I'm confused about why you first run an areg with the absorb() option, followed by a regular regression.
                The areg commands are commented out. Since your code does estimations using areg, my point is to show you that the same models can be estimated using regress, explicitly adding the absorbed variable as indicators on the RHS.
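
                A quick check on the same data (a minimal sketch) confirms they agree:

                Code:
                * sketch: absorb(company) and explicit i.company indicators give the same slope
                webuse grunfeld, clear
                qui areg invest mvalue, absorb(company)
                di %9.7f _b[mvalue]    // .1898776
                qui regress invest mvalue i.company
                di %9.7f _b[mvalue]    // identical, as are the classical SEs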

                Also, out of curiosity, since I tried several ways to do this but kept failing: if this were to be done manually in Stata, how would one do it?
                For the third time, the whole point of my approach is that you don't do this manually. If you are getting errors, what you should do is show exactly what commands you ran and what error messages you got from Stata. Then we can take it up from there.



                • #9
                  Andrew Musau I apologise for the delay in my response. Thank you very much for this!

                  I attempted to alter my code to let Stata calculate the p-values, as you suggested. I have highlighted in bold below the changes I made to the original code. However, I have run into the following issues and would very much appreciate your help here:

                  1. Error r(322) - m_recovered_1_b was estimated with a nonstandard vce (robust). I assume this is on account of the suest command, but I'm not sure how to resolve this
                  2. How can I amend the "lincom" code line (below) to aggregate across all my stored values? I have between 45-60 different stored values across the multiple datasets that I run this code on

                  Code:
                  lincom (`w_1'*_b[m1_mean:mvalue]) + (`w_2'*_b[m2_mean:mvalue])
                  Code:
                  
                  capture program drop runEventStudies
                  
                  program define runEventStudies
                      
                      args data yList name
                      
                  
                      use "${outdir}\\`data'.dta", clear
                      
                      
                   
                        * Run regressions    - include test rates
                      foreach y of local yList {
                          
                          disp "`y'"
                          
                          forvalues i = 1/`N' {
                              disp "`i'"
                              
                              * Balanced T, with FEs to control for week
                              reg `y' post treated_post tested i.weeknum i.state1 if eventBlock == `i'  ///
                                        //mine is not a balanced panel - restrict analysis to make it balanced
                              local b_`y'_`i'_b = _b[treated_post]
                              local se_`y'_`i'_b = _se[treated_post]  
                              local w_`y'_`i'_b=1/_se[treated_post]  
                          
                              
                              }
                          }
                      
                              
                  
                  *** Step 2: Construct a dataset saving all coefficients and SEs
                  
                  *keep Treated_State adopt_week eventBlock cem_varlist
                      qui duplicates drop
                  
                      foreach suff in b  {
                          
                          foreach y of local yList {
                                  
                              qui gen b_`y'_`suff' = ""
                              qui gen se_`y'_`suff' = ""
                              qui gen w_`y'_`suff' = ""
                          
                              forvalues i = 1/`N' {
                                  
                                  qui replace b_`y'_`suff' = "`b_`y'_`i'_`suff''" if eventBlock == `i'
                                  qui replace se_`y'_`suff' = "`se_`y'_`i'_`suff''" if eventBlock == `i'
                                  qui replace w_`y'_`suff' = "`w_`y'_`i'_`suff''" if eventBlock == `i'
                                  }
                              
                              qui destring b_`y'_`suff', replace
                              qui destring se_`y'_`suff', replace
                              qui destring w_`y'_`suff', replace
                              di %4.3f `w_`y'_`suff''
                              
                              est sto m_`y'_`suff'
                              suest m_`y'_`suff'
                              
                              lincom (`w_1'*_b[m1_mean:mvalue]) + (`w_2'*_b[m2_mean:mvalue])
                              
                              }
                          }
                   
                      
                      drop eventBlock
                      
                          * Save the data
                          foreach x in Treated_State {
                          encode `x', gen(temp)
                          drop `x'
                          rename temp `x'
                          }
                   
                       * save
                      order state weeknum cem_varlist
                      sort state weeknum
                      save "${coeffdir}\coefficients_`name'.dta", replace
                      
                      
                  end
                  
                  
                  
                  ////////////////////////////////////////////////////////////////////////////////
                  ////////////                                                        ////////////
                  ////////////             PART 2. Run the event studies                ////////////
                  ////////////                                                        ////////////
                  ////////////////////////////////////////////////////////////////////////////////
                  
                  
                  * Acquired    
                  runEventStudies  "1matched_cohort"                                        /// Data
                                  `"confirmed_1 recovered_1 deceased_1"'                     /// yList
                                   "one"                                                    /// name
                    
                  
                  runEventStudies  "2matched_cohort"                                        /// Data
                                  `"confirmed_2 recovered_2 deceased_2"'                     /// yList
                                   "two"                                                    /// name



                  • #10
                    Originally posted by Scott Rick

                    I attempted to alter my code to let stata calculate the p-values, as you suggested. I have highlighted in bold below the changes I made to the original code. However, I have run into the following issues and would very much appreciate your help here:

                    1. Error r(322) - m_recovered_1_b was estimated with a nonstandard vce (robust). I assume this is on account of the suest command, but I'm not sure how to resolve this
                    Estimate the models without -vce(robust)-. When suest combines the estimates, it outputs a robust VCE. This is explicit in the command's documentation:

                    The estimators should be estimated without vce(robust) or vce(cluster clustvar) options. suest returns the robust VCE, allows the vce(cluster clustvar) option, and automatically works with results from the svy prefix command (only for vce(linearized)).


                    2. How can I amend the "lincom" code line (below) to aggregate across all my stored values? I have between 45-60 different stored values across the multiple datasets that I run this code on

                    lincom (`w_1'*_b[m1_mean:mvalue]) + (`w_2'*_b[m2_mean:mvalue])
                    Store the weights as scalars and then consider this:

                    Code:
                    local command (scalar(w1)*_b[m1_mean:mvalue])
                    forval i=2/20{
                        local command "`command'  + (scalar(w`i')*_b[m`i'_mean:mvalue])"
                    }
                    di "`command'"
                    Res.:

                    Code:
                    . di "`command'"
                    (scalar(w1)*_b[m1_mean:mvalue])  + (scalar(w2)*_b[m2_mean:mvalue])  + (scalar(w3)*_b[m3_mean:mvalue])  + (scalar(w4)*_b[m4_mean:mvalue
                    > ])  + (scalar(w5)*_b[m5_mean:mvalue])  + (scalar(w6)*_b[m6_mean:mvalue])  + (scalar(w7)*_b[m7_mean:mvalue])  + (scalar(w8)*_b[m8_me
                    > an:mvalue])  + (scalar(w9)*_b[m9_mean:mvalue])  + (scalar(w10)*_b[m10_mean:mvalue])  + (scalar(w11)*_b[m11_mean:mvalue])  + (scalar
                    > (w12)*_b[m12_mean:mvalue])  + (scalar(w13)*_b[m13_mean:mvalue])  + (scalar(w14)*_b[m14_mean:mvalue])  + (scalar(w15)*_b[m15_mean:mv
                    > alue])  + (scalar(w16)*_b[m16_mean:mvalue])  + (scalar(w17)*_b[m17_mean:mvalue])  + (scalar(w18)*_b[m18_mean:mvalue])  + (scalar(w1
                    > 9)*_b[m19_mean:mvalue])  + (scalar(w20)*_b[m20_mean:mvalue])
                    So you just enter this as

                    Code:
                    lincom `command'
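
                    One detail worth making explicit (a sketch, assuming models m1-m20 and scalars w1-w20 have been stored as above): all the stored models must enter a single suest call before lincom can reference coefficients across them.

                    Code:
                    * sketch: build the model list, combine once, then test
                    local models
                    forval i = 1/20 {
                        local models `models' m`i'
                    }
                    suest `models'
                    lincom `command'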



                    • #11
                      Andrew Musau Thank you for that.

                      Estimate the models without -vce(robust)-. When suest combines the estimates, it outputs a robust VCE. This is explicit in the command's documentation: The estimators should be estimated without vce(robust) or vce(cluster clustvar) options. suest returns the robust VCE, allows the vce(cluster clustvar) option, and automatically works with results from the svy prefix command (only for vce(linearized)).
                      I'm confused here, since vce(robust) does not appear anywhere in my code in #9: not in the models, nor with suest. What am I missing?


                      2. I amended the code from #9 as below. However, I get the error "r(111) - w_confirmed_1_b not found". I don't understand this, since I do see w_confirmed_1_b in the dataset, and the display command gives me the output below.



                      Code:
                      Output:
                      
                      (scalar(w_confirmed_1_b)*_b[m_confirmed_1_b_mean:treated_post])  + (scalar(w_y'_b2)*_b[m_confirmed_1_b2_mean:mvalue])  + (scalar(w_y'_b3)*_b[m_con
                      > firmed_1_b3_mean:mvalue])  + (scalar(w_y'_b4)*_b[m_confirmed_1_b4_mean:mvalue])  + (scalar(w_y'_b5)*_b[m_confirmed_1_b5_mean:mvalue])  + (scalar
                      > (w_y'_b6)*_b[m_confirmed_1_b6_mean:mvalue])  + (scalar(w_y'_b7)*_b[m_confirmed_1_b7_mean:mvalue])  + (scalar(w_y'_b8)*_b[m_confirmed_1_b8_mean:m
                      > value])  + (scalar(w_y'_b9)*_b[m_confirmed_1_b9_mean:mvalue])  + (scalar(w_y'_b10)*_b[m_confirmed_1_b10_mean:mvalue])  + (scalar(w_y'_b11)*_b[m_
                      > confirmed_1_b11_mean:mvalue])  + (scalar(w_y'_b12)*_b[m_confirmed_1_b12_mean:mvalue])  + (scalar(w_y'_b13)*_b[m_confirmed_1_b13_mean:mvalue])  +
                      >  (scalar(w_y'_b14)*_b[m_confirmed_1_b14_mean:mvalue])  + (scalar(w_y'_b15)*_b[m_confirmed_1_b15_mean:mvalue])  + (scalar(w_y'_b16)*_b[m_confirme
                      > d_1_b16_mean:mvalue])  + (scalar(w_y'_b17)*_b[m_confirmed_1_b17_mean:mvalue])  + (scalar(w_y'_b18)*_b[m_confirmed_1_b18_mean:mvalue])  + (scalar
                      > (w_y'_b19)*_b[m_confirmed_1_b19_mean:mvalue])  + (scalar(w_y'_b20)*_b[m_confirmed_1_b20_mean:mvalue])
                      Code:
                      *** Step 2: Construct a dataset saving all coefficients and SEs
                      
                      *keep Treated_State adopt_week eventBlock cem_varlist 
                          qui duplicates drop
                      
                          foreach suff in b {
                              
                              foreach y of local yList {
                                      
                                  qui gen b_`y'_`suff' = ""
                                  qui gen se_`y'_`suff' = ""
                                  qui gen t_`y'_`suff' = ""
                                  qui gen df_`y'_`suff' = ""
                                  qui gen w_`y'_`suff' = ""
                              
                                  forvalues i = 1/`N' {
                                      
                                      qui replace b_`y'_`suff' = "`b_`y'_`i'_`suff''" if eventBlock == `i'
                                      qui replace se_`y'_`suff' = "`se_`y'_`i'_`suff''" if eventBlock == `i'
                                      qui replace t_`y'_`suff' = "`t_`y'_`i'_`suff''" if eventBlock == `i'
                                      qui replace df_`y'_`suff' = "`df_`y'_`i'_`suff''" if eventBlock == `i'
                                      qui replace w_`y'_`suff' = "`w_`y'_`i'_`suff''" if eventBlock == `i'
                                      }
                                  
                                  qui destring b_`y'_`suff', replace
                                  qui destring se_`y'_`suff', replace
                                  qui destring t_`y'_`suff', replace
                                  qui destring df_`y'_`suff', replace
                                  qui destring w_`y'_`suff', replace
                                  di %4.3f `w_`y'_`suff''
                                  
                                  est sto m_`y'_`suff'
                                  suest m_`y'_`suff'
                                  
                                  
                                  
                                  local command (scalar(w_`y'_`suff')*_b[m_`y'_`suff'_mean:treated_post])
                                  forval i=2/20{
                                  local command "`command'  + (scalar(w_y'_`suff'`i')*_b[m_`y'_`suff'`i'_mean:mvalue])"
                                  }
                                  di "`command'"
                                  
                                  lincom `command'
                                  }
                                  
                              }
                      
                      * Acquired    
                      runEventStudies  "1matched_cohort"                                        /// Data
                                      `"confirmed_1 recovered_1 deceased_1"'                     /// yList
                                       "one"                                                    /// name



                      • #12
                        At this point, I suggest that you seek help from a colleague competent in Stata coding or wait for someone else to volunteer, as I won't be able to provide the level of help you are asking for. My examples are clear and do not involve explicitly storing any coefficients or standard errors, only the weights; estimates store in my code already takes care of that. What you show does not even include an estimation command, so it is impossible to determine whether or not a robust VCE was used. You can direct the colleague to this thread if they want to pursue my suggestions.



                        • #13
                          I think you should look at metan. The effect size of each study (or model in your case) is weighted by the inverse of its variance, so it does what you are trying to do.

                          In the future, I recommend you create a toy dataset that people here can actually run and study. You've given us a dataex sample that does not permit the code to run, which is pointless. The problem should be simple to understand and stripped of unnecessary coding. Here, aggregating 2 models is the same as aggregating 100.



                          • #14
                            Code:
                            clear all
                            
                            set obs 1000
                            g id = _n
                            g fe = runiform()
                            g treated = rbinomial(1,0.3)
                            g group = int(runiform(1,11))
                            
                            expand 30
                            bys id: g time = _n
                            
                            g post = time>20
                            
                            g y = .
                            forv i = 1/10 {
                                replace y = fe + 1*post + (2+`i'/10)*post*treated + rnormal() if group==`i'
                            }
                            
                            matrix R = J(10,2,.)
                            forv i = 1/10 {
                                qui areg y c.treated#c.post post treated if group==`i', absorb(id) robust
                                matrix R[`i',1] = r(table)[1,1]    // coefficient on treated#post
                                matrix R[`i',2] = r(table)[2,1]    // its standard error
                            }
                            
                            clear
                            svmat R
                            meta set R1 R2
                            meta summarize, random
                            meta summarize, fixed
                            meta summarize, common



                            • #15
                              Andrew Musau Hi Andrew. You are right, I did not clean up my original code and kept adding on, which simply complicated things. I apologize for this.

                              I have cleaned up the code to reflect only what you suggested. This solved the robust-VCE issue. The only place where I am still tripping up is the storing of the weights, where I get the error "r(111) - w_confirmed_1_1 not found". When I break down your code and run the final lincom bit manually on just a few of the regressions and the associated weights, it works just fine. However, when I run it on the entire set using the code below, it appears the weights are not being stored. I do see the weights under each regression, so I know they are being generated.

                              I understand that you have helped me out a lot, but if you can help me with this last bit I would be very grateful!

                              George Ford Thank you. I apologize, I did not realize that the dataex sample did not allow the code to run. I should have checked that; that's completely on me! I do appreciate the alternative, and if I can't figure out lincom, I will switch to metan.



                              Code:
                              
                                  capture program drop runEventStudies
                                  program define runEventStudies
                              
                                      args data yList name
                              
                                      use "${outdir}\\`data'.dta", clear
                              
                                      qui sum eventBlock
                                      local N = r(max)
                                      disp "TOTAL: `N'"
                              
                                      * Run regressions - include test rates
                                      foreach y of local yList {
                                          disp "`y'"
                                          forvalues i = 1/`N' {
                                              disp "`i'"
                              
                                              * Balanced T, with FEs to control for week
                                              reg `y' post treated_post tested i.weeknum i.state1 if eventBlock == `i'
                                              local w_`y'_`i' = 1/_se[treated_post]
                                              di %4.3f `w_`y'_`i''
                                              est sto m_`y'_`i'
                              
                                              suest m_`y'_`i'
                                          }
                                      }
                              
                                      foreach y of local yList {
                                          local command (scalar(w_`y'_1)*_b[m_`y'_1_mean:treated_post])
                                          forval j = 2/`N' {
                                              disp "`j'"
                                              local command "`command' + (scalar(w_`y'_`j')*_b[m_`y'_`j'_mean:treated_post])"
                                          }
                                          di "`command'"
                              
                                          lincom `command'
                                      }
                              
                                  end
                              
                              
                              
                                  ////////////////////////////////////////////////////////////////////////////////
                                  ////////////                                                        ////////////
                                  ////////////             PART 2. Run the event studies                ////////////
                                  ////////////                                                        ////////////
                                  ////////////////////////////////////////////////////////////////////////////////
                              
                              
                                  * Acquired    
                                  runEventStudies  "1matched_cohort"                                        /// Data
                                                  `"confirmed_1 recovered_1 deceased_1"'                     /// yList
                                                   "one"                                                    /// name
                              
                              
                              
                              
                              
                              ***********SAMPLE DATA***********
                              
                              
                              clear
                              input float(confirmed_1 recovered_1 deceased_1 post treated_post tested) byte weeknum long state1 float eventBlock
                                 .23389708   .07607053  .003094638 1 0  6.429327 22 1  3
                                  .6372796    .3602375   .00734077 1 0  7.283556 24 1  3
                                .008312343  .003418496           0 0 0 .23756747  8 1  6
                                .008312343  .008312343           0 1 0 1.2649873 11 1  6
                                 1.0441525    .9825477  .014213745 1 0  19.46225 33 1 11
                                 1.1517812   1.0988845   .01536524 1 0 27.654156 37 1  6
                                 1.2578626    1.238647  .015617128 1 0  54.43516 47 1  8
                                .008780137  .008312343           0 1 0  2.461749 14 1  9
                                .008312343  .008312343           0 1 0 1.2649873 11 1  9
                                .008312343  .008312343           0 1 0 2.0354803 13 1  9
                                .004713926  .002770781           0 0 0 .09640159  7 1 10
                                  .7967974    .6804606  .011694854 1 0  8.805542 26 1  3
                                 .03573228   .01925153           0 1 0   4.38136 18 1  6
                                .002626844 .0007196834           0 0 0         0  5 1  6
                                  .6372796    .3602375   .00734077 1 0  7.283556 24 1 11
                                 1.0160489    .9537243  .013889888 1 0 17.842175 32 1 13
                                 1.2471393    1.222598  .015617128 1 0  47.19406 44 1 12
                              .00025188917           0           0 0 0         0  3 1  2
                                 .09913638   .04893847   .00043181 1 0  5.770313 21 1  7
                                .002626844 .0007196834           0 0 0         0  5 1  1
                                .002806765   .00259086           0 0 0 .05048579  6 1 10
                                 1.0160489    .9537243  .013889888 1 0 17.842175 32 1  6
                                 1.2323138   1.1967255   .01554516 1 0  42.51036 42 1 13
                                .008312343  .008312343           0 1 0         0 10 1  6
                                 1.1014034   1.0445844  .015077366 1 0  23.22494 35 1 11
                                  .4445124   .19399065  .005325656 1 0  6.938575 23 1 13
                                 1.2426412    1.211839  .015617128 1 0  45.15088 43 1  1
                                .002806765   .00259086           0 0 0 .05048579  6 1  6
                                 .04303706   .03004678           0 1 0  4.748615 19 1  9
                                   .013638  .009859662           0 1 0  3.484779 16 1 13
                                 .03573228   .01925153           0 1 0   4.38136 18 1 10
                                  .3594099   .35465994 .0044620363 1 0 16.013422 48 1  9
                                  .8549838    .7644117   .01270241 0 0  10.32965 27 1  8
                                .010759266 .0084562795           0 1 0  3.002123 15 1  5
                                  .7401943    .5493343   .00942785 1 0  7.820691 25 1 12
                                .004713926  .002770781           0 0 0 .09640159  7 1 13
                                  .9310543    .8765743  .013098237 1 0 13.269522 29 1  3
                                 1.0441525    .9825477  .014213745 1 0  19.46225 33 1  9
                                 .09913638   .04893847   .00043181 1 0  5.770313 21 1  3
                                  .4445124   .19399065  .005325656 1 0  6.938575 23 1  9
                                .008780137  .008312343           0 1 0  2.461749 14 1 11
                                  .9869018    .9267722  .013602016 1 0 16.150414 31 1 10
                                .008312343  .007916517           0 0 0         0  9 1  8
                                 1.2028787   1.1686219   .01536524 1 0 35.946453 40 1  9
                                .008312343  .008312343           0 1 0  1.872976 12 1  2
                                .004713926  .002770781           0 0 0 .09640159  7 1  9
                                 1.1281036   1.0713927  .015185318 1 0  25.49417 36 1 13
                                 .02400144  .011982728           0 1 0  3.956459 17 1  2
                               .0024469234           0           0 0 0         0  4 1  8
                                 1.1014034   1.0445844  .015077366 1 0  23.22494 35 1 10
                                .008312343  .008312343           0 1 0         0 10 1 13
                                 1.2578626    1.238647  .015617128 1 0  54.43516 47 1 11
                                  .6372796    .3602375   .00734077 1 0  7.283556 24 1  1
                                .010759266 .0084562795           0 1 0  3.002123 15 1  9
                                  .8549838    .7644117   .01270241 1 0  10.32965 27 1  4
                                 1.1746311    1.124973   .01536524 1 0  30.29748 38 1  9
                                .002626844 .0007196834           0 0 0         0  5 1  5
                                 .04303706   .03004678           0 1 0  4.748615 19 1  5
                                .002626844 .0007196834           0 0 0         0  5 1  7
                                 .23389708   .07607053  .003094638 1 0  6.429327 22 1  6
                                  .4445124   .19399065  .005325656 1 0  6.938575 23 1  2
                                 1.2323138   1.1967255   .01554516 1 0  42.51036 42 1  4
                                 .05505577   .03983447           0 0 0  5.203814 20 1  1
                                 1.2512414   1.2304786  .015617128 1 0  49.57629 45 1  9
                                 1.2471393    1.222598  .015617128 1 0  47.19406 44 1  4
                                  .7401943    .5493343   .00942785 1 0  7.820691 25 1 10
                                 1.1281036   1.0713927  .015185318 1 0  25.49417 36 1  3
                                .002806765   .00259086           0 0 0 .05048579  6 1  1
                                 1.0441525    .9825477  .014213745 1 0  19.46225 33 1  5
                                .008312343  .008312343           0 0 0 2.0354803 13 1  1
                                  .7967974    .6804606  .011694854 1 0  8.805542 26 1  6
                                  1.256423    1.234113  .015617128 1 0  52.06128 46 1  8
                                  .8549838    .7644117   .01270241 1 0  10.32965 27 1 10
                                  .9005398     .834041   .01302627 1 0  11.85135 28 1 11
                               .0024469234           0           0 0 0         0  4 1 11
                                 1.2471393    1.222598  .015617128 1 0  47.19406 44 1  5
                                  .9619647     .904462  .013314142 1 0  14.64822 30 1 13
                                .008312343  .008312343           0 1 0         0 10 1 12
                                 .02400144  .011982728           0 1 0  3.956459 17 1 13
                               .0024469234           0           0 0 0         0  4 1  9
                                 1.1281036   1.0713927  .015185318 1 0  25.49417 36 1 12
                                .008312343  .008312343           0 1 0 1.2649873 11 1 11
                                 .04303706   .03004678           0 0 0  4.748615 19 1  1
                                  .7401943    .5493343   .00942785 1 0  7.820691 25 1  1
                                 .03573228   .01925153           0 1 0   4.38136 18 1  2
                                .008780137  .008312343           0 1 0  2.461749 14 1 12
                                .010759266 .0084562795           0 1 0  3.002123 15 1  4
                                 .09913638   .04893847   .00043181 1 0  5.770313 21 1  4
                                   .013638  .009859662           0 1 0  3.484779 16 1 12
                                 1.0160489    .9537243  .013889888 1 0 17.842175 32 1  7
                                .008312343  .008312343           0 1 0 2.0354803 13 1 13
                                  .9619647     .904462  .013314142 1 0  14.64822 30 1 12
                                .008312343  .008312343           0 1 0 2.0354803 13 1 12
                                 1.0441525    .9825477  .014213745 1 0  19.46225 33 1  2
                                .002626844 .0007196834           0 0 0         0  5 1  2
                                 .04303706   .03004678           0 1 0  4.748615 19 1 11
                                .010759266 .0084562795           0 1 0  3.002123 15 1 11
                                 1.1281036   1.0713927  .015185318 1 0  25.49417 36 1  2
                                  .9005398     .834041   .01302627 1 0  11.85135 28 1  4
                                   .013638  .009859662           0 1 0  3.484779 16 1  6
                              end
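
                              One hypothesis, sketched below: -scalar()- resolves only Stata scalars, never local macros, so the weights I create with -local- inside the loop can never be found by -lincom-; and -suest- probably needs to be called once over all the stored models, so that every m_*_mean equation is simultaneously available. A minimal sketch of that change, assuming `yList' and `N' are set as in the program above (and, as in my original code, the weights are not normalized, so dividing by their sum would turn the weighted sum into a weighted average):

                              Code:
                              foreach y of local yList {
                                  local models ""
                                  forvalues i = 1/`N' {
                                      qui reg `y' post treated_post tested i.weeknum i.state1 if eventBlock == `i'
                                      * store the weight as a -scalar- so scalar() can find it later
                                      scalar w_`y'_`i' = 1/_se[treated_post]
                                      est sto m_`y'_`i'
                                      local models `models' m_`y'_`i'
                                  }
                                  * one -suest- call across all the stored models, so every
                                  * m_*_mean equation exists in the combined results
                                  suest `models'
                                  local command (scalar(w_`y'_1)*_b[m_`y'_1_mean:treated_post])
                                  forval j = 2/`N' {
                                      local command "`command' + (scalar(w_`y'_`j')*_b[m_`y'_`j'_mean:treated_post])"
                                  }
                                  lincom `command'
                              }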

                              Comment
