
  • Moderation analysis - Interaction Term is significant, but the marginal effects are not.

    Hello,

    just in case my question is trivial: I am a novice at econometrics and Stata. I did my own research to answer this question, but it only left me more confused. So I would really appreciate your help!

    I ran an intreg model with an interaction term between two continuous variables. The regression output shows that the interaction term is significant, if only at p = 0.058.
    While the main effect is negative, the interaction term has a positive coefficient.
    To visualize the effects I calculated predictive margins and used marginsplot. The predicted margins are all significant as well.
    According to this graph, the moderator weakens the negative main effect, and at high levels of the moderator the effect even becomes positive.

    Until now I assumed that a significant interaction term is enough to support my hypothesis.
    However, I read about a simple slope test as a further check.
    My first question is: is a simple slope test really necessary?

    To obtain the simple slope values, I calculated marginal effects.


    Code:
      margins, dydx(IV) at (MODERATOR = (-.79112649 .03806308 .86725265 1.6964421 2.5256317 3.3548212))
    
    Average marginal effects                        Number of obs     =        907
    Model VCE    : OIM
    
    Expression   : Linear prediction, predict()
    dy/dx w.r.t. : IV
    
    1._at        : MODERATOR =   -.7911265
    
    2._at        : MODERATOR =    .0380631
    
    3._at        : MODERATOR  =    .8672526
    
    4._at        : MODERATOR  =    1.696442
    
    5._at        : MODERATOR  =    2.525632
    
    6._at        : MODERATOR  =    3.354821
    
    ------------------------------------------------------------------------------
                 |            Delta-method
                 |      dy/dx   Std. Err.      z    P>|z|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
    IV         |
             _at |
              1  |  -.3028635   .1648471    -1.84   0.066    -.6259579    .0202309
              2  |  -.1928102   .1186903    -1.62   0.104    -.4254389    .0398185
              3  |  -.0827569   .0879611    -0.94   0.347    -.2551574    .0896436
              4  |   .0272964   .0901326     0.30   0.762    -.1493602    .2039529
              5  |   .1373496    .123481     1.11   0.266    -.1046687     .379368
              6  |   .2474029   .1706112     1.45   0.147     -.086989    .5817948
    ------------------------------------------------------------------------------
    
    I standardized the variables, so the standardized value -.79112649 actually corresponds to a raw moderator value of zero.
    So according to these results, there is a significant marginal effect only when the moderator is zero.
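
    For reference, since -egen, std()- standardizes using the sample mean and standard deviation, a standardized value can be mapped back to the raw scale as raw = mean + std_value*sd. A quick sketch of that check (assuming MODERATOR is the raw, unstandardized variable):

    Code:
      summarize MODERATOR
      * raw moderator value corresponding to the standardized value -.79112649
      display r(mean) + (-.79112649)*r(sd)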

    What does this mean for my interpretation of the interaction effect and for the explanatory power of my figure showing the predictive margins?


    I am very grateful for any help!
    I hope I have provided all necessary information.

    Many thanks, Sara.

  • #2
    I think you need to give a little more detail about what you have done. It sounds like you have chosen methods that maximize the obscurity of your results, so it is not surprising you are confused here. Without seeing exactly what you mean when you say you "standardized" the variables, it is hard to comment, but it is possible that if you did this incorrectly (which many people do) you do not even have an interaction model here. (For more specific advice, show the actual code you used when posting back.) In general, standardizing variables is a bad idea, doubly so if you plan an analysis with interaction terms. Maybe there is a compelling reason to do it in your situation, but otherwise you really should avoid it.

    You then confuse yourself further by focusing on the statistical significance of your findings. The American Statistical Association has recommended that the concept of statistical significance be abandoned. See https://www.tandfonline.com/doi/full...5.2019.1583913 for the "executive summary" and
    https://www.tandfonline.com/toc/utas20/73/sup1 for all 43 supporting articles, or https://www.nature.com/articles/d41586-019-00857-9 for the tl;dr. Even if you want to resist this advice, in the context of interaction models statistical significance becomes even more obscure.


    Whenever you interact two continuous variables, the marginal effect of the IV is a linear function of the moderator, and you can see that illustrated in your -margins- output: with each step up in your moderator, the marginal effect of IV increases. In fact, given the linear relationship between the moderator and the marginal effect of IV, from an algebraic perspective there is always some value of the moderator for which that marginal effect is zero, and so, inevitably, in some range around that value of the moderator, the marginal effect of IV will be "not significant." Again, you can see this in your own output: somewhere between the third and fourth values of the moderator specified in your -at()- option, the marginal effect of IV crosses zero. You can calculate the exact value of the moderator where this happens from the regression coefficients: it is moderator = -_b[IV]/_b[IV#moderator]. So, unless the interaction coefficient is actually zero, there is always some range of moderator values where the marginal effects are "not significant," and outside that range they are "significant." This fact itself is just an algebraic phenomenon that means nothing in the real world. (Of course, sometimes the range of moderator values for which the IV marginal effects are "not significant" lies beyond the range of moderator values actually encountered in the data, so this phenomenon is not always as obvious as it is in your case.)
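
    If you want that crossover value along with a delta-method standard error, -nlcom- will compute it from the fitted model. A sketch, assuming the model was fit with c.IV##c.MODERATOR (substitute your actual variable names, which determine the coefficient names):

    Code:
      * moderator value at which the marginal effect of IV crosses zero
      nlcom (crossover: -_b[IV]/_b[c.IV#c.MODERATOR])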

    Anyway, the way to understand whether you have a meaningful degree of effect moderation here, even if you remain in the statistical significance paradigm, is not to ask whether the marginal effects at particular values of the moderator are significant or not, but rather whether the marginal effects at important values of the moderator differ (significantly, if you wish) from each other.
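
    In Stata, you can get those pairwise contrasts of marginal effects directly from -margins-. A sketch, using the lowest and highest values from your original -at()- list:

    Code:
      margins, dydx(IV) at(MODERATOR = (-.79112649 3.3548212)) pwcompare(effects)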

    So you need to think about what your research goal actually is. If your goal is actually to test a null hypothesis about the existence of a moderating effect, then ignore the -margins- output and just look at the significance of the interaction coefficient in the regression output. But most null hypotheses are just straw men, and when fitting these models we are usually more interested in estimating effects. If that is your goal, then ignore the p-values and focus on the marginal effects and the confidence intervals around them. And look at those confidence intervals not just to see whether they contain zero (which is just statistical significance in disguise) but to see what their limits imply about the real-world meaningfulness of the marginal effects.



    • #3
      Hello Clyde,

      thanks a lot for this informative answer!
      At first I wanted to do without standardizing, because I got the impression that opinion in the literature is rather mixed on it. However, my university supervisor has explicitly advised me to standardize the variables that enter the interaction term.
      In order to standardize the variables I did the following:
      egen IV_std=std(IV)
      egen Moderator_std = std(Moderator)
      I then included the interaction term as follows: IV_std#Moderator_std
      Of course I included IV_std and Moderator_std as well.
      I hope this is right?


      Simply put, I want to test the following hypotheses:
      H1: There is a positive association between IV and DV.
      H2: This association is positively moderated by the Moderator.

      First I ran the regression without the moderator and the interaction term in order to test H1.
      The results show a negative relationship between IV and DV, which is positively moderated, and looking at the margins plot after calculating predictive margins, one can see that the relationship may become positive at high levels of the moderator.
      Let's assume I want to show the following:
      There is a positive moderation effect, which weakens the negative relationship, and at a high level of the moderator there may be a positive relationship.
      In this case, the significance of the interaction coefficient and the predictive margins are enough, right? So there is no need for marginal effects?

      To me it seems that in my course of study we still follow the statistical significance paradigm.

      Through my research I increasingly get the impression that current practice and what is scientifically correct often differ from each other. This is a bit confusing for a newbie like me.
      So I am very grateful for the help I get here.



      • #4
        In order to standardize the variables I did the following:
        egen IV_std=std(IV)
        egen Moderator_std = std(Moderator)
        I then included the interaction term as follows: IV_std#Moderator_std
        Of course I included IV_std and Moderator_std as well.
        I hope this is right?

        Yes, that's right.

        Let's assume I want to show the following:
        There is a positive moderation effect, which weakens the negative relationship, and at a high level of the moderator there may be a positive relationship.
        In this case, the significance of the interaction coefficient and the predictive margins are enough, right? So there is no need for marginal effects?
        No, that's not correct. The predictive margins tell you about the levels of DV at high values of the moderator (and specified or average values of IV depending on the specifics of your code). But those aren't relevant to whether the DV:IV relationship is positive at those values. If you want to show that at a high level of the moderator there may be a positive relationship, then the marginal effects at high values of the moderator are precisely what you are interested in: they need to be positive.

        If you look at your -margins- output in #1, you basically have it. As the moderator increases, so do the marginal effects. Look at the last value of moderator, 3.35... The marginal effect is about 0.24, which is positive, and in terms of just its magnitude, it's the second largest in the entire batch. The confidence interval runs from -.09 to +.58. It is mostly in positive territory, and the negative lower bound is just barely negative. Had you chosen a slightly higher value of the moderator, you no doubt would have had a confidence interval entirely in positive territory. I don't know how you chose those particular 6 values of the moderator. If that last one is really as large as one can realistically expect to see it, then the conclusion has to be that the data do not permit a precise enough estimation of the marginal effect of IV at any level of the moderator to make clear-cut claims about the direction of that relationship. But if larger values of the moderator actually occur in real life, then you could justifiably add one to the list and you would almost surely find a marginal effect of IV whose confidence interval lies entirely in positive territory. Its confidence interval will, no doubt, still be wide: the data just aren't sharp enough to really give tight estimates of these effects. But at least you will have clarity on the direction of the IV:DV relationship at that point.
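
        One way to check what moderator values are realistic in your data, and to add a higher one to the list, is sketched below (the choice of the 95th percentile is just an illustration, not a recommendation):

        Code:
          summarize MODERATOR, detail
          * if larger values really occur, add one (e.g., the 95th percentile) to the at() list
          margins, dydx(IV) at(MODERATOR = (3.3548212 `r(p95)'))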

        Through my research I increasingly get the impression that current practice and what is scientifically correct often differ from each other. This is a bit confusing for a newbie like me.
        That is, indeed, and unfortunately, often the case. And it takes a long time and a lot of concerted effort to root out bad practices because they are passed down from generation to generation. Teachers tend to teach what they were taught. As a newcomer in your field, you should not make these issues the hill you choose to die on. But you should make it your business to learn the better practices, even if they are not presently followed in your milieu. Over time, things likely will change, and you will be prepared to switch to using better practices when that happens. And remember these experiences. Some day, perhaps, you will be a senior investigator and a mentor in your field. If and when that happens, remain open to yet newer developments and encourage those you mentor to adopt them. And use the relative security that comes with seniority to advocate for appropriate changes in methodology when you write papers and make presentations: push back against those who continue to propagate discredited methods.



        • #5
          Thank you very much for the detailed answer. That helped me a lot!
