  • #16
    this means that average illiquidity is about 0.096014 higher after the ban?
    Yes.

    do the t-statistics (or z-statistics, respectively) matter here?
    Do they give me information about significance, or just additional information?
    They only matter if you wish to test the null hypotheses that illiquidity was zero before, or after, the ban. I don't know enough about the subject matter here to know whether those would be useful or meaningful, so I can't give you a definitive answer. I can say that in most situations these tests are nonsensical, and we usually ignore those t- and z-statistics.
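    To make concrete what those reported statistics are testing, here is a minimal Python sketch. All of the numbers (the 0.25 estimate, the 0.05 standard error, the 571 degrees of freedom) are made up for illustration: the statistic is just the estimate divided by its standard error, and the p-value refers to the null hypothesis that the underlying quantity is exactly zero.

    ```python
    from scipy import stats

    estimate = 0.25   # hypothetical average illiquidity before the ban
    std_error = 0.05  # hypothetical standard error of that average
    df = 571          # hypothetical residual degrees of freedom

    t_stat = estimate / std_error              # the reported t- (or z-) statistic
    p_value = 2 * stats.t.sf(abs(t_stat), df)  # two-sided p-value for H0: value == 0
    print(f"t = {t_stat:.2f}, p = {p_value:.2g}")
    ```

    Whether "average illiquidity is exactly zero" is a hypothesis worth testing is a subject-matter question, not a statistical one.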

    so this means that the increase in illiquidity after the ban is not significant?
    If by "significant" you mean statistically significant, then you are correct.

    meaning that there is no change in illiquidity after the ban?
    No. You are among the many who have been mis-taught what statistical significance (or the lack thereof) means. It emphatically does not mean that. The lack of statistical significance for this change means that estimating the change as best you can with this data, and taking into account its noise, you cannot determine whether this effect is positive, or negative, or zero. It is too small relative to the noise in the data to say. This may reflect very noisy data, or inadequate sample size, or a small (possibly zero) effect, or some combination of these.

    But in the real world, "no effect" is rarely true. These null hypotheses are usually straw men. So attributing lack of statistical significance to "no effect" should be a last-resort explanation that you would use only if you have somehow excluded other problems like sample size and noisy data. In your case, the width of your confidence interval is nearly four times the main estimate, so noise and sample size would seem to be very much at work here.

    Looking back at your original regression, although the total number of observations, greater than 500, seems adequate at first glance, we also see that there are only 12 groups in the data. You are somewhat saved by the fact that rho in your original model is zero, so your effective sample size really is close to 572. So it sounds like this illiquidity measure is just very noisy compared to whatever the effect of the time dummy is.
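    To see what "the width of the confidence interval is nearly four times the estimate" implies, here is a small Python sketch. The 0.096014 estimate is the one from your output; the standard error and degrees of freedom are hypothetical values chosen only so the interval comes out roughly that wide.

    ```python
    from scipy import stats

    estimate = 0.096014  # post-ban change in illiquidity reported in this thread
    std_error = 0.09     # hypothetical standard error, chosen only for illustration
    df = 571             # hypothetical degrees of freedom (~572 observations)

    crit = stats.t.ppf(0.975, df)  # two-sided 95% critical value
    lo, hi = estimate - crit * std_error, estimate + crit * std_error
    print(f"95% CI: [{lo:.3f}, {hi:.3f}], width = {hi - lo:.3f}")
    # The interval runs from clearly negative to clearly positive values, so the
    # data cannot distinguish a positive, negative, or near-zero effect.
    ```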

    But if it were significant, would it mean that illiquidity increased by about 0.096014 after the ban?
    The significance is irrelevant. Even though it is not statistically significant, 0.096014 is still your best estimate of the increase in illiquidity after the ban. The question is how precise that estimate is. Your wide confidence interval (which also goes along with the high p-value in this case) says that it is a very imprecise estimate: it could be off by a lot, and indeed the "true" effect could be in the opposite direction! It is much better to think about effect sizes (coefficients) and precision (standard errors or width of confidence intervals) than about statistical significance. In any event, statistical significance tells you nothing about the actual effect; rather, it is a convoluted way of saying something about how much you know about the effect from your data.
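    Here is a small illustration of that last point, in Python with made-up numbers: two coefficients can have exactly the same t-statistic and p-value while describing effects that differ by a factor of a thousand, so "statistically significant" by itself tells you almost nothing about the size of the effect.

    ```python
    from scipy import stats

    df = 571  # hypothetical degrees of freedom
    for label, coef, se in [("tiny but precise", 0.001, 0.0005),
                            ("large but noisy",  1.0,   0.5)]:
        t = coef / se
        p = 2 * stats.t.sf(abs(t), df)
        print(f"{label}: coef = {coef}, approx. 95% half-width = {1.96 * se:.4f}, p = {p:.3f}")
    # Both rows print t = 2.0 and p ~ 0.046, yet the estimated effects differ
    # by a factor of 1000.
    ```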

    Statistical significance takes a complicated statistic, the p-value, which is already a difficult-to-understand mish-mash of effect size, data noise, and sample size, and then makes it even more opaque by imposing an arbitrary 0.05 threshold, thereby discarding most of the scanty information the p-value provided in the first place. Worse yet, many people are taught misinterpretations of p-values in their introductory statistics courses (like "if it's not significant, there is no effect"), so that even if you use them correctly and avoid misinterpreting them, you are likely to be misinforming the people you communicate your results to.
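    A quick simulation makes the point that a p-value mixes effect size, noise, and sample size. In this Python sketch (all numbers invented), the true effect never changes, yet the p-value falls from unimpressive to tiny purely because the sample size grows.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_effect, noise_sd = 0.1, 1.0  # hypothetical effect size and noise level

    for n in (50, 500, 5000):
        sample = rng.normal(true_effect, noise_sd, size=n)
        t_stat, p_value = stats.ttest_1samp(sample, 0.0)
        print(f"n = {n:5d}  estimate = {sample.mean():+.3f}  p = {p_value:.3f}")
    # The underlying effect is identical in every run, so the p-value on its own
    # cannot tell you how big the effect is, only how loudly the data shout about it.
    ```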

    I always teach my students to ignore the p-values until they have fully understood the coefficients and confidence intervals in their models. Then, if they really can't find anything better to do with their time, they can look at the p-values. But even then, I have largely banned the phrase "statistical significance" from discourse in my seminars. (There are occasional situations where p-values are important, and where applying a threshold of statistical significance is actually helpful, but they are uncommon, and they almost never arise in this kind of modeling.)
