  • Summary of the logic behind basic statistical inferences

    The central limit theorem is the basis for statistical inference procedures for means. But there are other statistical effect measures, such as risk, odds, incidence rate, survival, and hazard rate. These measures have something in common with means.

    mean = sum of observations / number of observations

    similarly,

    risk = number of events / total number of observations

    odds = number of events / (total number of observations - number of observations with an event)

    incidence rate = number of events / total time at risk, where total time at risk equals the sum of each observation's time at risk until the event occurs (or follow-up ends)

    survival = probability of surviving beyond a specific time

    The hazard rate's definition is somewhat more involved: it is the instantaneous event rate at a given time, conditional on having survived up to that time.
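As a minimal sketch of the definitions above (the data here are made up for illustration, not from any real study):

```python
# Hypothetical follow-up data: 1 = event occurred, 0 = censored (no event).
events = [1, 0, 1, 1, 0]
# Each observation's time at risk (until event or end of follow-up).
time_at_risk = [2.0, 5.0, 1.5, 3.0, 5.0]

n = len(events)
n_events = sum(events)

risk = n_events / n                             # events / total observations
odds = n_events / (n - n_events)                # events / non-events
incidence_rate = n_events / sum(time_at_risk)   # events / total person-time

print(risk, odds, incidence_rate)
```

Note that all three measures, like the mean, are a sum divided by a total, which is what lets the same large-sample machinery apply.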

    These commonalities with means make it possible to draw statistical inferences for these measures in a manner similar to that for means.

    So, can anyone recommend a good reference for this?

    The above concerns statistical inference for confidence intervals. If we want to compare two independent samples and their respective populations, we are making statistical inference for two samples.

    In this setting, we compare by one of two methods: subtraction or division.

    For instance, we can make inference for mean1 - mean2, risk1 - risk2, odds1 - odds2, incidence rate1 - incidence rate2, etc. We could also compare by mean1 / mean2, risk1 / risk2, odds1 / odds2, incidence rate1 / incidence rate2, etc.

    In my experience, we transform risk1 / risk2 and odds1 / odds2 to the natural log scale, which turns the comparison by division into a comparison by subtraction. For example,

    log(risk1 / risk2) = log(risk1) - log(risk2)

    log(odds1 / odds2) = log(odds1) - log(odds2)
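The log-scale trick above can be sketched as follows. This is a minimal illustration with made-up counts; the standard error formula used is the usual large-sample approximation for a log risk ratio, not something stated in the post:

```python
import math

# Hypothetical two-sample counts (for illustration only).
events1, n1 = 30, 100
events2, n2 = 20, 100

risk1 = events1 / n1
risk2 = events2 / n2

# Comparison by division, moved to the log scale...
log_rr = math.log(risk1 / risk2)
# ...equals a comparison by subtraction:
assert math.isclose(log_rr, math.log(risk1) - math.log(risk2))

# Approximate 95% CI built on the log scale (where the estimate is
# closer to normal), then exponentiated back to the ratio scale.
se = math.sqrt((1 - risk1) / events1 + (1 - risk2) / events2)
lo = math.exp(log_rr - 1.96 * se)
hi = math.exp(log_rr + 1.96 * se)
print(risk1 / risk2, lo, hi)
```

The point is that the familiar "estimate ± 1.96 × SE" logic for means carries over once the ratio is expressed as a difference of logs.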
    Last edited by Tom Hsiung; 12 May 2018, 06:08.