
  • Parallel regression assumption with ologit - inconsistent results between oparallel and a standalone Brant test command

    Dear all,

    For the first time I have not managed to solve my problem by searching previous questions, so this is my debut post (I hope that does not mean I have crossed the border into extreme stupidity). I am using Stata 14.

    I need to run diagnostics on the following ordered logit, fitted on a sample of only 68 observations, with 5 categories and a very uneven distribution of observations across the dependent variable:
    [Figure: Capture.PNG — ologit output]
    As expected, the tests I performed with the oparallel command (even though, I agree, they may not be reliable given the sample size) suggest a violation of the parallel regression assumption. However, when I run the Brant test on its own to get the details (using the spost13_ado package recommended here), the overall result is absurd and does not correspond to the one obtained by oparallel.
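    For reference, the comparison described above can be sketched as follows. The variable names (score, x1-x4) are hypothetical placeholders, not taken from the original post; oparallel is available from SSC, and brant ships with the spost13_ado package.

    ```stata
    * Hedged sketch: variable names are placeholders for the actual model.
    * Install the user-written commands if needed:
    *   ssc install oparallel
    *   findit spost13_ado   // for brant

    * Fit the ordered logit
    ologit score x1 x2 x3 x4

    * Omnibus tests of the parallel regression assumption
    oparallel

    * Brant test alone, with per-variable detail and the underlying binary logits
    brant, detail
    ```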
    [Figure: tests1.png — oparallel and Brant test output for the original model]
    After regrouping the categories of the dependent variable (combining the three highest scores), the two commands yield the same results (see below). Note that the binary logits in the details of the Brant test below match the first two categories in the details above, which shows that this part worked, since those two categories did not change.
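    The regrouping step might be done along these lines; the coding (outcome scored 1-5, top three scores collapsed) is an assumption for illustration:

    ```stata
    * Hedged sketch: assumes the outcome is coded 1-5; collapse categories
    * 3, 4, and 5 into a single top category, leaving a 3-category variable.
    recode score (1 = 1) (2 = 2) (3/5 = 3), generate(score3)

    * Refit and retest on the regrouped outcome
    ologit score3 x1 x2 x3 x4
    oparallel
    brant, detail
    ```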
    [Figure: Untitled.png — ologit output after regrouping]
    [Figure: tests2.png — test output after regrouping]
    What am I missing in the first model that breaks the Brant test? Should I worry about the perfectly predicted observations (removing them does not solve the problem)?

    Thank you very much for your answer,
    Best,
    Magda
    Last edited by Magdalena Kizior; 29 Jun 2017, 10:13.

  • #2
    Your issues with testing a parallel model here are, I would argue, the least of your worries. With 68 observations, 5 response categories, and 4 predictors, I would have essentially no confidence in the estimates from any kind of categorical regression model. The general experience is that estimates will be substantially biased away from the null in this kind of situation. Further, the standard error estimates will be untrustworthy, and the distribution of the test statistics is unlikely to match the asymptotic assumptions. My impression is that different parallelism tests can give different answers even in reasonable circumstances.

    Comment


    • #3
      Dear Mike,

      Thank you for your reply. I completely agree with you about the model; my question was purely theoretical, about the same test yielding two completely different results with two different commands.

      Best,
      Magda

      Comment


      • #4
        I don't see the problem: the tests test different null hypotheses, so it is perfectly possible to get different results. Granted, the hypotheses are related, but not closely enough to force the tests to agree, especially in such problematic samples.
        ---------------------------------
        Maarten L. Buis
        University of Konstanz
        Department of history and sociology
        box 40
        78457 Konstanz
        Germany
        http://www.maartenbuis.nl
        ---------------------------------

        Comment
