
  • How to interpret quadratic interaction in linear regression?

    In my linear regression, I am trying to see if there's an interaction between a categorical variable with more than 2 levels (CatVar) and a continuous independent variable (ConVar). The complication is that the continuous variable has a quadratic relationship with the dependent variable.

    When I check for the interaction with
    regress Outcome i.CatVar##c.ConVar##c.ConVar
    "testparm" reveals that neither the quadratic interaction term (CatVar#c.ConVar#c.ConVar) nor the linear interaction term (CatVar#c.ConVar) is significant.

    However, when I include the interaction term i.CatVar##c.ConVar##c.ConVar in the full regression model together with other variables, "testparm" reveals that only the quadratic interaction term (CatVar#c.ConVar#c.ConVar) is significant, while the linear interaction term (CatVar#c.ConVar) is non-significant.

    How can I interpret this? Is there an interaction effect at all? Or do I conclude that there is no interaction?
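
    The two setups being described could be sketched as follows (a minimal sketch; Covariate1 and Covariate2 are hypothetical stand-ins for the unnamed "other variables"):

    ```stata
    * Model with the interaction alone
    regress Outcome i.CatVar##c.ConVar##c.ConVar

    * Joint test of the linear and quadratic interaction terms
    testparm i.CatVar#c.ConVar i.CatVar#c.ConVar#c.ConVar

    * Full model: same interaction plus other covariates
    * (Covariate1 and Covariate2 are hypothetical placeholders)
    regress Outcome i.CatVar##c.ConVar##c.ConVar Covariate1 Covariate2
    testparm i.CatVar#c.ConVar i.CatVar#c.ConVar#c.ConVar
    ```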

  • #2
    Welcome to this forum.
    I think you should simplify things, as what you're after is really difficult to explain.
    I would focus on the quadratic relationship only.
    Besides, you do not tell us whether any post-estimation routines (-estat ovtest- and -estat hettest-, at least) have been performed, nor how many observations your dataset contains.
    Finally, posting what you typed and what Stata gave you back within CODE delimiters is more helpful than tons of words aiming to describe what's the matter with your data. Thanks.
    Kind regards,
    (Stata 15.1 SE)


    • #3
      In addition to Carlo's helpful answer, you seem to have a simpler problem. It is quite possible that adding variables makes variables from a partial model become statistically significant (often because the added variables reduce the error variance), or makes significant variables become insignificant (often because the added variables are correlated with the original ones). That is just how it is.

      So, which model do you interpret? I personally would interpret the full model on the basis that your estimation of the full model [assuming some added variables are statistically significant] demonstrates the partial model suffers from omitted variables bias.


      • #4
        The description in #1 is incomplete and it is not possible to really understand what you have done, so it is hard to give specific advice. For example, it is unclear how you used -testparm-, or just what models you have looked at.

        If you would like more specific advice, I suggest you post back showing the exact and complete output of your regressions (along with the commands that led to the outputs). To make the output readable, please follow the advice and instructions in Forum FAQ #12 and use code delimiters.

        I will add one more comment: in a quadratic model, it is not useful to run tests that include quadratic terms but not the linear ones, nor vice versa. Only tests that include both linear and quadratic terms simultaneously are valid in such models.
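
        The advice above could be sketched as follows (a hedged sketch using the variable names from #1): test the linear and quadratic interaction terms jointly rather than one at a time.

        ```stata
        * Fit the quadratic interaction model
        regress Outcome i.CatVar##c.ConVar##c.ConVar

        * Valid: joint test of linear AND quadratic interaction terms together
        testparm i.CatVar#c.ConVar i.CatVar#c.ConVar#c.ConVar
        ```

        A significant joint test indicates that the CatVar groups differ in how Outcome depends on ConVar; separate tests of the linear or quadratic pieces alone are not meaningful in a quadratic model.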