Dear all,
For the first time, searching previous questions did not solve my problem, so this is my debut post (I hope that does not mean I have crossed into extreme stupidity). I am using Stata 14.
I need to run diagnostics on the following ordered logit, fitted on a sample of only 68 observations, with 5 categories and a very uneven distribution of observations across the dependent variable:
As expected, the tests I performed with the oparallel command (even though, I agree, they may not be reliable given the sample size) suggest a violation of the parallel regression assumption. However, when I run the Brant test alone to get the details (from the spost13_ado package recommended here), the overall result is absurd and does not match the one obtained by oparallel.
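For clarity, the sequence of commands I am describing has this general form (the outcome and covariate names below are placeholders, not my actual variables):

```stata
* Fit the ordered logit on the 5-category outcome
ologit outcome x1 x2 x3

* Tests of the parallel regression assumption (oparallel package)
oparallel

* Brant test with per-variable detail (spost13_ado package)
brant, detail
```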
After regrouping the categories of the dependent variable (combining the three highest scores), the two commands yield the same results (see below). Note that the binary logits in the details of the Brant test below match the first two categories in the details above, which shows that this part worked, since those two categories did not change.
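The regrouping was done along these lines (again a sketch with placeholder names; the actual category codes may differ):

```stata
* Collapse the three highest scores into a single category,
* keeping the two lowest categories unchanged
recode outcome (1 = 1) (2 = 2) (3/5 = 3), generate(outcome3)

* Refit and retest on the 3-category version
ologit outcome3 x1 x2 x3
oparallel
brant, detail
```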

What am I missing in the first model that breaks the Brant test? Should I be worried about the perfectly predicted observations (removing them does not solve the problem)?
Thank you very much for your answer,
Best,
Magda