
  • Logrank test vs Cox model

    Hello,
    I am working on a project on survival analysis, and I have one categorical variable for which I ran both a logrank test and a univariate Cox model.
    The problem is that I found different p-values for the same categorical variable: p = 0.058 from the logrank test versus p = 0.139 from the Cox model.
    My supervisor suggested that I examine this difference, but I don't know how, and I don't understand the reason for it.
    Any suggestions?

  • #2
    The Cox model relies on the proportional hazards assumption. The logrank test does not. If your data are not consistent with the proportional hazards assumption, then the Cox results may not be valid. Take a look at -help stcox diagnostics- for a few different ways to explore this.
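    For instance, a minimal sketch of those checks (the names time_var, event_var, and group are placeholders, not from your data):

    Code:
    stset time_var, failure(event_var)   // declare the survival-time data
    stcox i.group                        // univariate Cox model
    estat phtest, detail                 // Schoenfeld residual test of proportional hazards
    stphplot, by(group)                  // -ln(-ln(S)) curves; roughly parallel lines support PH
    sts graph, by(group)                 // Kaplan-Meier curves for visual comparison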



    • #3
      I have already checked for non-proportionality and everything seems to be ok.



      • #4
        Well, if proportional hazards is OK, there may not be an explanation. There is no guarantee that two different analyses of the same data will produce the same results, particularly if you are viewing them through the lens of p-values. Of course, there is always the possibility that the analyses were not carried out quite properly. So I suggest that you post back showing the code and complete output of both the logrank test and the Cox model estimation. (Be sure to put those in code delimiters so they are readable. If you are not familiar with code delimiters, read Forum FAQ #12 for details.) Note that there are no unimportant details, so to be sure that what you post shows exactly what happened, do it by copy/pasting directly from Stata's Results window or your log file.

        It would probably also help if you ran -sts graph, by(one_categorical_variable)-, exported that to .png and then attached that to the post so people can actually see the survival patterns in both groups.
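        Something along these lines (a sketch; the variable and file names are placeholders):

        Code:
        sts graph, by(one_categorical_variable)
        graph export survival_by_group.png, width(1200) replace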



        • #5
          Well, the problem is that I have a lot of .do files because this is a large research project, so I am not sure where to look first for mistakes.
          But here is the code for the logrank test and the Cox model.

          Code:
          gen failed = ADev                      // event indicator copied from ADev
          replace failed = 0 if ID == 150        // set the subject with ID 150 to censored
          stset ADev_tm, failure(failed)         // declare the survival-time data
          sts test CD4final                      // logrank test across CD4 groups
          xi: stcox i.CD4final                   // univariate Cox model (xi expands the indicators)
          estat phtest                           // global Schoenfeld test of proportional hazards
          sts gr, f by(CD4final) legend(order(1 ">=500 cells/µl" 2 "350-499 cells/µl" ///
              3 "200-349 cells/µl" 4 "<200 cells/µl") col(1) subti("CD4 levels")) ///
              xti("time") ylab(0(0.2)0.2, angle(hori)) yti("Cumulative Prob.")

          [Attachment: finalCD4.png (Kaplan-Meier failure curves by CD4 group)]
          Last edited by Ioannis Michalopoulos; 20 Jan 2018, 13:35.



          • #6
            Thank you for showing the code. It appears correct. But I wonder about some of the results. In particular, the graph you show does not seem consistent with the proportional hazards assumption. The curves for 200-349, 350-499 and >= 500 all seem to be more or less the same, so a constant hazard ratio of 1 looks OK for those. The < 200 cells curve, by contrast, rises very steeply out to 5 years but then runs flat. It looks to me like the hazard ratio is very high before 5 years and then actually goes to 0 abruptly at that time. This is not what proportional hazards looks like graphically. It's hard for me to understand how this passed proportional hazards testing, unless perhaps the sample is just too small to test it properly.
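            One way to probe that directly (a sketch; it assumes your data are already -stset- and that level 4 of CD4final codes the < 200 group) is to let that group's hazard ratio vary with log time and test the interaction:

            Code:
            gen byte cd4_low = CD4final == 4 if !missing(CD4final)  // assumption: level 4 is the <200 group
            xi: stcox i.CD4final, tvc(cd4_low) texp(ln(_t))         // hazard ratio allowed to change with log time

            A significant tvc() coefficient is evidence of non-proportional hazards for that group.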

            The other thing is, just from an epidemiologic perspective, the flattening of that < 200 cells curve seems strange. I would expect the incidence of AIDS and death to continue to rise in this group. So that leads me to wonder if there is something wrong with your data--it doesn't fit with my sense of reality in this area.

            Added: Amplifying on the above, what we see in the graph for the < 200 group is that there is very rapid mortality/incidence of AIDS out to 5 years. Then it plateaus at around 15% of that cohort. What happened to the remaining 85%? There are no further incident AIDS cases or deaths among them, even out to 15 years. That does not seem clinically plausible. So were they all censored at 5 years? In that case, I think there is something wrong with your data collection process. Why would this group suddenly be out of follow-up at 5 years, but you continue to gather data on the others? Something is definitely wrong here, and it isn't the Stata code.
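            A quick way to look for that kind of mass censoring (again assuming level 4 of CD4final codes the < 200 group):

            Code:
            tab _d if CD4final == 4                            // events vs. censorings in the <200 group
            summarize _t if CD4final == 4 & _d == 0, detail    // follow-up times among the censored

            If the censored follow-up times cluster just below 5 years, the problem is in the data collection, not the Stata code.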
            Last edited by Clyde Schechter; 20 Jan 2018, 13:51.



            • #7
              Maybe this graph looks slightly better?
              Also, the p-value from -estat phtest- is 0.064, which is not quite statistically significant.


              [Attachment: finalCD4.png (revised survival graph by CD4 group)]



              • #8
                Well, in this graph it is harder to tell. But even here, the < 200 group plateaus out at roughly 7.5 years, while the incidence/mortality keeps climbing in the other groups. I can't say as definitively that these graphs are incompatible with proportional hazards, but they certainly aren't a very good case for it either.

                As far as a p-value of 0.064 goes, it may not be statistically significant by the arbitrary p < 0.05 criterion, but it's pretty close. It suggests to me that your data set may be too small to adequately power a test of proportional hazards, or it may be that the group that seems to be most violative of it, the < 200 group, is just too small for the test to be reliable, even though the other groups are perhaps adequate.

                And I think it is always a mistake to take p < 0.05 literally, in any context. It's an arbitrary cutpoint on a continuous variable. When you have a p-value that is close to 0.05, and you have one graph that seems completely incompatible with proportional hazards, and another graph that is ambiguous in that regard but also looks suspicious, I'd be inclined to think we have a proportional hazards violation on our hands.
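                With a borderline global test, the per-covariate tests and residual plots are worth a look (a sketch; since the model was fit with -xi-, the indicator names below are the ones -xi- would generate, so check them against your output):

                Code:
                estat phtest, detail                       // Schoenfeld test for each indicator separately
                estat phtest, plot(_ICD4final_4) yline(0)  // scaled residuals over time for the <200 indicator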



                • #9
                  You can also see the number of observations per group in the table below:

                             >=500   350-499   200-349   <200
                  Event         11        41        13      5
                  No event     235      1317       444     49
                  Total        246      1358       457     54
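                  For reference, a table like this can be produced directly from the -stset- data (a one-line sketch):

                  Code:
                  tab CD4final _d   // _d == 1 marks events, _d == 0 censorings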

