Medel:
both newbies and experienced Stata users are recommended by the FAQ to post what they typed and what Stata gave them back (within CODE delimiters) to increase their chances of getting helpful replies.
The same source warns against attachments. Thanks.
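As a minimal sketch of what the FAQ recommends (variable names here are hypothetical), a reproducible excerpt of the data can be shared within CODE delimiters via -dataex-:

```
* -dataex- ships with recent Stata releases; on older ones: ssc install dataex
* posts a copy/paste-able excerpt of the first 10 observations
dataex y x1 x2 x3 in 1/10
```

Pasting the resulting block between CODE delimiters lets listers recreate your data exactly, which a screenshot cannot do.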
Medel:
while -_hatsq- has no explanatory power, there may still be an omitted linear predictor in your regression equation.
This is the only relevant issue, in that both -hettest- and -vif- look OK (even though, for good reasons explained in https://www.amazon.com/Course-Econom.../dp/0674175441, the latter postestimation test has recently lost a bit of its popularity).
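For reference, the diagnostics mentioned above can be run after -regress- as follows (a minimal sketch; the outcome and predictor names are hypothetical):

```
regress y x1 x2 x3 x4 x5 x6

* Ramsey RESET test for omitted variables (the test flagging trouble here)
estat ovtest

* link test: an insignificant -_hatsq- supports the chosen functional form
linktest

* Breusch-Pagan / Cook-Weisberg test for heteroskedasticity
estat hettest

* variance inflation factors for the right-hand-side predictors
estat vif
```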
Two asides to the above:
- for the future, please use CODE delimiters rather than screenshots of results to explain what's the matter with your data;
- please call me Carlo, just like all on (and many more off) the list do.
Carlo:
Thanks for the tips. I will learn how to use CODE delimiters.
Another question: if I remove one variable, the p-value from -ovtest- becomes insignificant. Is that because of collinearity?
The thing is, I have two options:
1) remove a variable, so that the p-value is insignificant;
2) fail -ovtest-, but explain that my model needs additional variables to increase its explanatory power.
Which is better? Is the latter a pure violation?
Many thanks!
Medel:
the predictors included on the right-hand side of the regression equation should give a fair and true view of the underlying data-generating process: hence, looking for the "right p-value" (whatever that may mean) is OK conditional on including all the predictors that are considered relevant in your research field, which remains the main issue.
On a slightly different note, I'm struck by the fact that the model, despite just 60 observations for 6 predictors, presents significant coefficients for half of the predictors. Add to that an impressively high R-squared, all this for a model that would, potentially, "benefit" from adding (theoretically) omitted variables. I don't know what each predictor represents, and the field is surely not the one I'm most acquainted with, but I'm wondering whether this is not a sort of "too good to be true" situation.