  • Carlo Lazzaro
    replied
    Pietro:
-i.YEAR- should be a categorical variable (such as 2011, 2012, and so forth) that identifies the panel time variable.
The problem with -i.TIME- is that it reduces the interaction to zero.
Try adding it outside the interaction:
    Code:
    xtset firmID year
    xtreg MATCHING i.IFRS##i.YEAR i.TIME <othercontrols>, fe



  • Pietro Fera
    replied
    Originally posted by Carlo Lazzaro View Post
    Pietro:
    I see your concern.
    What if you replace -i.TIME- with -i.YEAR- in your code (and get rid of -i.TIME-):
    Code:
    xtset firmID year
    xtreg MATCHING i.IFRS##i.YEAR <othercontrols>, fe
What would the variable YEAR represent in this case? Should it be a continuous variable from 2001 to 2015?

How could I capture the difference between the pre- and post-treatment periods in this way?

I'm going out of my mind with this... thank you
    Last edited by Pietro Fera; 09 Jun 2017, 09:07.



  • Carlo Lazzaro
    replied
    Pietro:
    I see your concern.
    What if you replace -i.TIME- with -i.YEAR- in your code (and get rid of -i.TIME-):
    Code:
    xtset firmID year
    xtreg MATCHING i.IFRS##i.YEAR <othercontrols>, fe



  • Pietro Fera
    replied
    Originally posted by Carlo Lazzaro View Post
    Pietro:
    welcome to the list.
    What if you try:
    Code:
    xtset firmID year
xtreg MATCHING i.IFRS##i.TIME <othercontrols>, fe
    Hi Carlo.

    Thank you for the answer.
I see your point, but I have a problem with the variable TIME: I don't know when it should be 1 or 0, especially for the control group.
I mean... I think I understand that I can have a different treatment period for each firm in my treatment group (please correct me if I'm wrong), but in that case, if the TIME variable is always 0 for the control group, the command doesn't work (see the attached image).
So I was thinking that the right approach is this: for the control group, TIME should be 1 for all years from 2006 to 2015, while for the treatment group, TIME should be 1 only for the years in which each firm reports under IFRS, regardless of the fact that the first IFRS adoptions by some firms were in 2006.
Anyway, I really don't know whether what I have in mind is actually doable.
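As a minimal sketch only, that coding could look something like the lines below, assuming a hypothetical variable adoptyear holding each treated firm's first IFRS year (missing for control firms):
Code:
* treatment group: 1 from each firm's own adoption year onward
gen byte TIME = (IFRS == 1) & !missing(adoptyear) & (year >= adoptyear)
* the alternative described above for the control group: 1 from 2006 onward
replace TIME = 1 if IFRS == 0 & year >= 2006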

    Thanks for the help.
    Attached Files
    Last edited by Pietro Fera; 09 Jun 2017, 07:32.



  • Carlo Lazzaro
    replied
    Pietro:
    welcome to the list.
    What if you try:
    Code:
    xtset firmID year
    xtreg MATCHING i.IFRS##i.TIME <othercontrols>, fe



  • Pietro Fera
    replied
    Hello everyone,
I'm a real beginner in the DID field and will probably ask "stupid" questions, but I would be grateful if you could help me.

    I have a set of many firms' accounting data from 2001 to 2015.
I would like to analyze the impact of IFRS adoption on the level of matching between revenues and expenses. This means that I have a treatment group represented by those firms that have adopted IFRS, and a control group of firms that do not use IFRS.
    So, I have these variables:
    MATCHING = dependent variable
    IFRS = dummy variable that is 1 for those firms that have adopted IFRS, and 0 otherwise
TIME = dummy variable that is 1 for every year after IFRS adoption, and 0 for the years before IFRS adoption

My problem is related to the treatment period (and so to the variable TIME), because I'm dealing with voluntary IFRS adoption and firms have therefore adopted IFRS in different years, starting from 2006. In this case the dummy variable that represents the treatment period can vary within the treatment group (in my case, it will be 1 for each firm i that reports under IFRS in year t), but what happens to the same time variable for the control group? I don't think it can be right for the time variable to always be zero for the control group; in fact, when I use the Stata commands, they don't work properly. So, should the time variable (for the control group) be 1 from 2006 to 2015 (the period in which the treatment starts, even if not for the whole treatment group)?

Or is the specification (Y = a0 + a1*TREAT*POST + YearDummies + FirmAttributes) no longer correct in this case, so that I should do something different?
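Read literally, that specification might look something like the sketch below; the names are only placeholders (TREAT = treatment-group dummy, POST = post-adoption dummy, <firmattributes> = firm-level controls), and the coefficient on 1.TREAT#1.POST would correspond to a1:
Code:
* DID with a TREAT x POST interaction, year dummies and firm attributes
* (hypothetical names; standard errors clustered by firm)
reg MATCHING i.TREAT##i.POST i.year <firmattributes>, vce(cluster firmID)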

    Thanks a lot for the answer!



  • Carlo Lazzaro
    replied
    Marisa:
as this query has little to do with the thread started by the original poster, please start a new thread in the future. Thanks.
The habit of adding a "small" constant (unfortunately, an unsatisfactory qualitative statement to shed light on a quantitative issue) to avoid missing values when the raw value of the variable to be logged is <= 0 is difficult to justify.
    When I'm forced/urged to log the original variable, I do not add anything and bear the consequence of that choice in terms of a reduced sample size.
Needless to say, any imputation practice is, in this case, out of the question.
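As a minimal sketch of that trade-off, assuming a hypothetical variable x that can be zero or negative:
Code:
gen double ln_x      = ln(x)       // missing whenever x <= 0
gen double ln_1plusx = ln(1 + x)   // defined for x > -1, but on a shifted scale
count if missing(ln_x) & !missing(x)   // observations lost by logging x directly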



  • Marisa Foraci
    replied
    Dear all,

I have a doubt about a regression run on an -xtset- panel dataset. In order to interpret a unit change in my predictors as a percent change in my outcome variable, I am transforming the outcome into its logged version. Now, the manual I am using as a reference for the DID syntax transforms variables into logs using gen newvar = ln(1+x), while I usually take logs with gen newvar = ln(x). When comparing the means of the two logged variables I get the following:


Code:
Mean estimation                  Number of obs = 6380

----------------------------------------------------------------
             |      Mean   Std. Err.     [95% Conf. Interval]
-------------+--------------------------------------------------
          ln |  .9707089    .0090013      .9530633     .9883545
    ln1plusx |  1.344655    .0060332      1.332828     1.356482
----------------------------------------------------------------

For ln1plusx the mean is higher, the standard error lower, and the confidence interval narrower.
Any hint on why one form should be preferred to the other?

    Thank you in advance



  • Clyde Schechter
    replied
Originally posted by Henrik Dalriksson View Post
Are there any situations where you would like to keep firm fixed effects and still include TREAT?
    If we are still talking about the situation presented at the beginning of this thread, it is not a question of whether one would like to keep firm fixed effects and still include TREAT. It is simply not possible.
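A sketch of why, with hypothetical variable names: TREAT does not vary within a firm, so the within (fixed-effects) transformation sweeps it out and Stata omits it as collinear with the firm fixed effects.
Code:
xtset firmID year
xtreg MATCHING i.TREAT##i.POST <othercontrols>, fe
* 1.TREAT is dropped; only 1.POST and the interaction 1.TREAT#1.POST remain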



  • Henrik Dalriksson
    replied
    Clyde Schechter

    Are there any situations where you would like to keep firm fixed effects and still include TREAT?

    Are there any arguments for why you would like to keep TREAT in this particular regression when you still have firm fixed effects?
    Last edited by Henrik Dalriksson; 29 May 2017, 13:54.



  • Clyde Schechter
    replied
    That question is beyond the scope of my knowledge and expertise. It depends on whether the rate of growth in GDP is related to your outcome variable in an additive or multiplicative way. That's not a statistics question, it's a question in your discipline. If I were you I'd ask a colleague in the discipline about that.



  • dupont john
    replied
Thanks Clyde!


Also, another thing I was worried about: if I have GDP growth in my regression, which is already a percentage, should I still take the log of this variable, even though it is already in percentage terms?

    Thanks a lot!

    JD



  • Clyde Schechter
    replied
    No, it is not an obligation. You need some pre-treatment-era and some post-treatment-era data in both groups. To the extent you have more of it (but not extending over time periods so long that relevant conditions not accounted for in your model change) you will get more precise estimates of the effect of treatment.

Assuming that outcome variation is homogeneous over time, the most efficient design for the same total amount of data would be to have equal durations before and after the change-point. That may well be why people commonly choose to use equal durations of observation before and after. But you can perfectly well do it with more on one side of the change-point than the other. After all, maximizing efficiency for the same total amount of data is only optimal if data from all time periods is equally easy to get.



  • dupont john
    replied
    Hi Clyde,


Another issue I am worried about is whether I should have the same number of years before and after my "treatment" takes place. For example, I saw that papers usually use 10 years before and 10 years after. Is this an obligation when we use a difference-in-differences model?

    Thanks!!


    Best,

    JD



  • Clyde Schechter
    replied
    My understanding of the difference between fixed and random effects estimators has nothing to do with endogeneity. As I understand it, for the random effects estimator to be consistent requires the assumption that the panel-level effects are independently and identically normally distributed, and they are independent of the covariates specified in the model. These assumptions are, in practice, sometimes false. The fixed effects estimator does not require these stringent assumptions and is more broadly consistent. However, the random effects estimator is more efficient (i.e. produces more precise estimates of the model coefficients) if it is consistent--which is why it is preferable to use it when it is not inconsistent. The Hausman test compares the results of the two models, and this indirectly tests whether the assumptions necessary for consistency of the random effects estimator are met.

    I should add that in economics there is, from what I have seen, a strong preference for consistent estimators and less regard for efficiency, so that random effects models are nearly always rejected if they do not pass the Hausman test. In some other fields, the traditions are different and if the results of the two estimators appear reasonably similar, a random effects model will be used even if it fails the Hausman test. (Particularly if the sample size is very large, so that the Hausman test has power to pick up tiny but immaterial departures from the assumptions.)
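A minimal sketch of the comparison described above, with hypothetical variable names (y, x1, x2, panelid):
Code:
xtset panelid year
xtreg y x1 x2, fe
estimates store fixed
xtreg y x1 x2, re
estimates store random
hausman fixed random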

