
  • Creating variables from stored FGT poverty measures

    Dear All,
    I just constructed the following poverty measures
    Code:
                
    group(zone year)      a=0        a=1        a=2
    1    0.45051    0.15565    0.07424
    2    0.57981    0.22978    0.12236
    3    0.39341    0.15794    0.08509
    4    0.79021    0.42030    0.26352
    5    0.45631    0.17127    0.08639
    6    0.55371    0.21982    0.11454
    7    0.52296    0.19020    0.09092
    8    0.55920    0.22276    0.11705
    9    0.33838    0.12851    0.06735
    10    0.61907    0.25688    0.13994
    11    0.32306    0.10053    0.04533
    12    0.57190    0.22638    0.11925
    13    0.29774    0.09968    0.04957
    14    0.51338    0.18552    0.09115
    15    0.42791    0.16115    0.08363
    16    0.67895    0.30197    0.17228
    17    0.31564    0.11616    0.05968
    18    0.50939    0.17982    0.08679
    19    0.31267    0.13610    0.08576
    20    0.58398    0.23640    0.12305
    21    0.45392    0.18429    0.10000
    22    0.65345    0.31652    0.19166
    23    0.61066    0.28144    0.16747
    24    0.67679    0.28936    0.15706
    25    0.35864    0.13116    0.06538
    26    0.57635    0.22272    0.11029
    27    0.50073    0.20173    0.10729
    28    0.77152    0.39423    0.24309
    29    0.63412    0.25309    0.13067
    30    0.77116    0.35862    0.20526
    31    0.54763    0.21473    0.11026
    32    0.65842    0.25118    0.12816
    33    0.42415    0.14907    0.07161
    34    0.77879    0.36587    0.21069
    35    0.67673    0.27356    0.14352
    36    0.75806    0.35490    0.20447
    obtained for six regions over a six-year period.
    I want to create three variables: p0, p1, and p2. First, it is important that I clarify the structure of the results. The value 0.45051 (group 1) is the headcount index of poverty in region i = 1 (where i = 1, 2, ..., 6) at date t = 1 (where t = 1, 2, ..., 6); 0.52296 (group 7) is the index for the first region in the second year. Note that there are 1,378 individual observations, so the 45% is the poverty rate in that region at that time. The p0 variable I intend to create should equal 0.45051 for all 1,378 individuals from that region at that time.
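    If the 36 group numbers run region-fastest as described (groups 1-6 are year 1, group 7 restarts at region 1 in year 2, and so on), the region and year indices can be recovered arithmetically from group = (t-1)*6 + i. A minimal sketch, assuming the estimates sit in a data set with a variable group holding the numbers 1-36 (all names here are illustrative):
    Code:
    * Recover the region (1-6) and year index (1-6) from the group number,
    * assuming group = (t-1)*6 + i (hypothetical variable names)
    gen zone    = mod(group - 1, 6) + 1
    gen yearidx = ceil(group/6)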

    Help is much appreciated.

    Thank you,
    Dapel

  • #2
    I suggest that you (1) sort this output by region and year, and then (2) list region year p0 p1 p2, sepby(region). The structure of your data would then be much clearer than above. If you have region and year variables in your individual-level data set, then you can use merge to attach these estimates shown above (saved to a data set) to each observation as appropriate using region and year as the match keys.
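    A minimal Stata sketch of this workflow, assuming the 36 zone-year estimates are saved in fgt.dta and the individual-level data set persons.dta contains matching zone and year variables (both file names are illustrative):
    Code:
    * Sort and inspect the estimates data set
    use fgt.dta, clear
    sort zone year
    list zone year p0 p1 p2, sepby(zone)
    save fgt.dta, replace

    * Attach the zone-year estimates to every individual observation
    use persons.dta, clear
    merge m:1 zone year using fgt.dta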



    • #3
      Thank you very much, Prof. Jenkins. The results are already sorted as required in (1) of your post. Now that I have performed (2), we have
      Code:
      . list zone year p0 p1 p2, sepby(zone)
      
           +----------------------------------------+
           | zone   year       p0       p1       p2 |
           |----------------------------------------|
        1. |    1   1980   .00051   .00041   .00036 |
        2. |    1   1985    .0006   .00051   .00045 |
        3. |    1   1992   .00026    .0002   .00018 |
        4. |    1   1996   .00018   .00019   .00022 |
        5. |    1   2004   .00062   .00054   .00049 |
        6. |    1   2010     .001   .00092   .00086 |
           |----------------------------------------|
        7. |    2   1980   .01111   .01036   .01002 |
        8. |    2   1985   .01685   .01579   .01507 |
        9. |    2   1992   .01625   .01382   .01233 |
       10. |    2   1996     .017   .01619   .01531 |
       11. |    2   2004   .02626   .02388   .02148 |
       12. |    2   2010   .03952   .03547   .03288 |
           |----------------------------------------|
       13. |    3   1980   .01796   .01696    .0166 |
       14. |    3   1985    .0138   .01233   .01174 |
       15. |    3   1992    .0258   .02286   .02155 |
       16. |    3   1996   .01941   .01854   .01828 |
       17. |    3   2004     .028   .02654   .02564 |
       18. |    3   2010   .01736   .01435   .01252 |
           |----------------------------------------|
       19. |    4   1980    .0612   .07659   .08723 |
       20. |    4   1985   .02999   .02928   .02898 |
       21. |    4   1992   .04813   .05036   .05219 |
       22. |    4   1996   .02052   .02338   .02572 |
       23. |    4   2004   .02708   .03255   .03646 |
       24. |    4   2010    .0657   .07262   .07596 |
           |----------------------------------------|
       25. |    5   1980   .02446   .02161    .0198 |
       26. |    5   1985   .01602   .01173    .0096 |
       27. |    5   1992   .01771   .01534   .01431 |
       28. |    5   1996   .04117   .04464   .04826 |
       29. |    5   2004   .03977   .03734   .03502 |
       30. |    5   2010   .05081   .04833   .04606 |
           |----------------------------------------|
       31. |    6   1980   .03905   .03647   .03452 |
       32. |    6   1985   .03109   .02895    .0277 |
       33. |    6   1992   .04749   .03944   .03458 |
       34. |    6   1996   .04705   .04733   .04667 |
       35. |    6   2004   .04926    .0539   .05604 |
       36. |    6   2010   .09102   .10025   .10492 |
           +----------------------------------------+
      .
      The problem is now solved, with good results:
      Code:
        Result                           # of obs.
          -----------------------------------------
          not matched                             0
          matched                            97,646  (_merge==3)
          -----------------------------------------
      .

      Once again, thanks a million.
      Dapel



      • #4
        Dear all, I want to calculate p0, p1, and p2. My data are cross-sectional (year 2014) household and individual survey data. My variables are per capita expenditure, province, age, age squared, education, gender, and land ownership. How can I calculate P0, P1, and P2? These indices will be used as outcome variables when running propensity score matching.
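        The FGT indices follow directly from the definition P_a = (1/n) * sum over the poor of ((z - y_i)/z)^a. A minimal sketch, assuming a per capita expenditure variable pce and a poverty line z (the variable names and the line value are illustrative; replace them with your own, or see Stephen Jenkins's povdeco command on SSC, which computes FGT(0), FGT(1), and FGT(2) with decompositions):
        Code:
        * Hypothetical names: pce = per capita expenditure, z = poverty line
        scalar z = 500000                      // illustrative poverty line
        gen double gap = max(0, (z - pce)/z)   // normalized gap, 0 if not poor
        gen p0 = pce < z                       // headcount indicator
        gen p1 = gap                           // poverty gap
        gen p2 = gap^2                         // squared poverty gap
        * Province-level FGT indices = means over individuals
        egen P0 = mean(p0), by(province)
        egen P1 = mean(p1), by(province)
        egen P2 = mean(p2), by(province)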
