  • Change in sign of relationship in interaction effect

    I am conducting an experiment with two independent variables in a panel data set spanning 12 years. The main effects of the individual independent variables on the dependent variable are positive and statistically significant, but when I include the interaction term in the model, the interaction effect is negative and not statistically significant. I am using the c. operator for the interaction effect. Is there any other possible alternative?
    [Attachment: Untitled.png, 162.2 KB]

  • #2
    [quote]Is there any other possible alternative?[/quote]
    It is not science to go "shopping" for different models until we find one whose results match our preferences. It is, if anything, closer to scientific misconduct. If there is a body of good-quality prior research that concludes that the interaction between those two variables is positive in models of this nature, then your current task is to step away from the keyboard and start trying to understand what about your study design, data gathering methods, and data management might have led you to reach a different conclusion.

    But I hasten to add that here on Statalist, most situations like yours actually arise because the person raising the question doesn't understand how to interpret the results they have. If you are thinking that this interaction term's negative sign means that the overall effect of Int_VAIC_w or Int_GDS_w on ENTERPRISE_VALUE is negative, that is simply not the case, and you need to learn how to interpret interaction terms correctly. In this interaction model, unlike the model without interaction, the coefficients of Int_VAIC_w and Int_GDS_w are not the overall marginal effects of those variables. Rather, they are the marginal effects of those variables conditional on the other variable being 0. I have no idea what these variables are, and you have not shown any example data that might shed light on that, but if 0 is not an observed (or even a possible) value of one of those variables, then a marginal effect conditional on it being 0 is of no real-world interest; it is just a mathematical abstraction in the model.
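    To see this point concretely, here is a minimal Python sketch using simulated data (not the poster's model; all variable ranges and coefficients are made up for illustration). In a model y = b0 + b1*x1 + b2*x2 + b3*x1*x2, the marginal effect of x1 is b1 + b3*x2, so the fitted b1 is the effect of x1 only at x2 = 0. When x2 is never anywhere near 0 in the data, b1 by itself describes nothing observable, and a negative b3 is perfectly compatible with a positive effect of x1 at every realistic value of x2:

    ```python
    import numpy as np

    # Illustration only: simulated data, not the poster's ENTERPRISE_VALUE model.
    # True model: y = 2 + 1.0*x1 + 0.5*x2 - 0.01*x1*x2 + noise
    rng = np.random.default_rng(0)
    n = 1000
    x1 = rng.uniform(0, 4, n)
    x2 = rng.uniform(50, 80, n)      # x2 is never 0 (or close to it) in the data
    y = 2.0 + 1.0 * x1 + 0.5 * x2 - 0.01 * x1 * x2 + rng.normal(0, 0.1, n)

    # OLS with an interaction term
    X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
    b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]

    # b1 estimates the effect of x1 at x2 = 0, a value never observed.
    # The marginal effect of x1 at a realistic x2 = 65 is b1 + b3*65,
    # which in the true model is 1.0 - 0.01*65 = 0.35: still positive,
    # even though the interaction coefficient b3 is negative.
    me_x1_at_65 = b1 + b3 * 65
    ```

    The negative interaction coefficient here only says the effect of x1 shrinks as x2 grows; it does not make the effect of x1 negative over the observed range of x2.
    
    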

    I think the best way to understand how these interactions work is to graph them. First, pick a set of representative values for each of the variables. As I know nothing about these variables, to demonstrate this approach, I will just use 0, 1, 2, 3, 4 for Int_VAIC_w, and 50, 60, 70, 80 for Int_GDS_w. I would re-run the same regression and then run the following commands (but using realistic values for the variables) and study each of the graphs:

    Code:
    // THE EXPECTED VALUES OF THE OUTCOME AT VARIOUS VALUES
    margins, at(Int_VAIC_w = (0 1 2 3 4) Int_GDS_w = (50 60 70 80))
    marginsplot
    
    // THE MARGINAL EFFECTS OF EACH VARIABLE ON THE OUTCOME
    // AT VARIOUS VALUES OF THE OTHER
    margins, dydx(Int_VAIC_w) at(Int_GDS_w = (50 60 70 80))
    marginsplot
    
    margins, dydx(Int_GDS_w) at(Int_VAIC_w = (0 1 2 3 4))
    marginsplot
    
    // THE AVERAGE MARGINAL EFFECTS OF EACH VARIABLE ON THE OUTCOME
    // N.B. NO GRAPH HERE, JUST A SHORT TABLE OF AVERAGE MARGINAL EFFECTS
    margins, dydx(Int_VAIC_w Int_GDS_w)
    You may find that your results are in fact consistent with what others have found.
