  • #16
    I truly appreciate your kind feedback; it was really helpful. By investment, I mean the dependent variable. I shall use the suggested command to see the effects. Thank you once again.

    • #17
      Sorry, just a quick question. In your first comment, you made a few calculations: -0.483, -0.1006, and -0.083. In ordinary arithmetic the ordering goes the other way (the more negative a number, the smaller it is). So does that mean -0.083 has the greatest effect? Or, since it is the least negative coefficient, does it mean a weaker negative effect, assuming all three are significant?

      • #18
        I apologise; I missed the magnitude part in your last comment. I understand now that it's the magnitude that matters; the signs only indicate the direction of the effect, positive or negative. Thank you so much. I extend infinite gratitude, sir, for explaining this matter in such a simple manner that it is easy to understand. God bless you.
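        The magnitude-versus-sign point in this exchange can be sketched numerically. This is a Python illustration added for clarity (not part of the thread), using the three coefficients quoted in #17:

```python
# The three coefficients quoted in #17; all are negative,
# so all three effects point in the same (negative) direction.
coefs = [-0.483, -0.1006, -0.083]

# Rank by absolute value: the sign gives the direction of the effect,
# the magnitude |b| gives its strength.
ranked = sorted(coefs, key=abs, reverse=True)

print(ranked)  # the first element has the strongest effect
```

        So -0.483 is the strongest effect in magnitude and -0.083 the weakest, even though -0.083 is the "largest" of the three numbers arithmetically.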


          • #20
            Hi Clyde,

            I hope you are doing well. I would appreciate it if you could please help with the interpretation of interaction terms.

            This is the model:

            R&D = Intercept + b1*Cash + b2*Ownership dummy + b3*(c.Cash # i.Ownership dummy)

            Results are:
            R&D                          Coefficient   WC-Robust    z      P>|z|   [95% conf. interval]
                                                       std. err.
            Cash                           -0.103        0.048     -2.15   0.032   -0.198   -0.009
            Ownership dummy = 1            -0.069        0.035     -1.99   0.047   -0.138   -0.001
            c.Cash # Ownership dummy
              Dummy = 1                     0.093        0.047      2.00   0.046    0.002    0.185
            _cons                           0.184        0.079      2.34   0.019    0.030    0.338
            I ran margins command:
            Code:
            margins ownership_dummy, dydx(cash)


            Average marginal effects                    Number of obs = 2,829
            Model VCE: WC-Robust
            Expression: Linear prediction, predict()
            dy/dx wrt: chcash_ta

                                 dy/dx   Delta-method    z      P>|z|   [95% conf. interval]
                                         std. err.
            Cash
              Ownership dummy
                Dummy = 0       -0.103      0.048      -2.15   0.032   -0.198   -0.009
                Dummy = 1       -0.010      0.030      -0.35   0.727   -0.068    0.048

            So, the main effects of the regression model show that when the ownership dummy is 0, an increase in cash means the firm invests less in R&D, so a negative relationship is justified. Here, the cash variable is the change in cash between periods t-1 and t. The result for the ownership dummy when its value is 1 is also justified, as these firms are reluctant to invest in R&D due to their unique characteristics (also supported by theory). For the interaction effect, the marginal effect is -0.010 for dummy = 1, which is greater than -0.103 for dummy = 0 since the values are negative. This means that if dummy = 1 firms invest in R&D (though they underinvest), they use more cash reserves for R&D than dummy = 0 firms, which may use other sources of financing for these investments. Is that interpretation right?
            Last edited by Zeenat Murtaza; 19 Jun 2023, 08:53.
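            As a quick arithmetic check (a Python sketch added for illustration, not part of the original post): the two dy/dx values in the margins output are simply b1 and b1 + b3 from the coefficient table.

```python
# Coefficients from the regression table in this post
b1 = -0.103  # Cash: marginal effect of cash when ownership dummy = 0
b3 = 0.093   # interaction: c.Cash # 1.Ownership dummy

dydx_dummy0 = b1        # matches the margins output for dummy = 0
dydx_dummy1 = b1 + b3   # matches the margins output for dummy = 1

assert round(dydx_dummy0, 3) == -0.103
assert round(dydx_dummy1, 3) == -0.010
```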

            • #21
              I can't really help you with the substantive interpretation as I do not have a background in finance and do not know if your interpretation of the variables themselves is correct.

              What we can say from a purely statistical interpretation perspective is that viewed as a function of Cash, R&D decreases at a rate of 0.103 per unit increase in Cash when ownership = 0, and at a rate of 0.01 per unit increase in Cash when ownership = 1. Qualitatively, R&D decreases less rapidly per unit increase in Cash when ownership = 1 than when ownership = 0.

              I hope that is helpful.

              • #22
                Thank you Clyde. So, it is the magnitude that matters; signs reflect direction in interaction terms and not the magnitude of effect.

                • #23
                  Clyde, I am slightly confused about interaction terms. Can you please help with three different situations?

                  First instance:

                  Y = b1X + b2Y + b3(X*Y)

                  here, the b1 and b2 coefficients are negative but b3 is positive.

                  Second instance:

                  Y = b1X + b2Y + b3(X*Y)

                  here, the b1 and b2 coefficients are negative, and the b3 interaction coefficient is also negative.

                  Third instance:

                  Y = b1X + b2Y + b3(X*Y)

                  here, b1 is negative, while the b2 coefficient is positive, and so is the b3 interaction coefficient.
                  Last edited by Zeenat Murtaza; 19 Jun 2023, 12:10.

                  • #24
                    All of the models you show in #23 are invalid: you can't have Y on both the left and right hand sides of the equations. I'll assume you meant Z instead of Y on the right hand side.

                    There is no need to distinguish the three cases, as there is nothing different about their interpretation. For simplicity, I will assume that the independent variable X is continuous and Z is dichotomous, similar to the situation in the previous post, though, again, there is nothing essentially different about this case from other combinations of continuous and categorical variables.

                    In any interaction model of this type, the key point is that the marginal effect of X on Y depends on the value of Z. Graphically, this means that a plot of the expected value of Y vs X will have two lines, one for Z = 0 and the other for Z = 1, and those lines are not parallel. In the model
                    Code:
                    Y = b0 + b1*X + b2*Z + b3*(X*Z)
                    b1 (regardless of its sign) is the marginal effect of X on Y, conditional on Z = 0. In graphical terms, it is the slope of the Y:X regression line for Z = 0.

                    The slope of the Y:X regression line for Z = 1 is more complicated: it is b1 + b3. In statistical terms, b1 + b3 is the marginal effect of X on Y conditional on Z = 1.
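                    As a concrete check of these formulas, here is a minimal simulation (in Python/numpy rather than Stata, purely for illustration; the coefficient values are borrowed from the earlier posts). Fitting the interaction model by OLS recovers b1 as the slope for Z = 0 and b1 + b3 as the slope for Z = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# True model: Y = b0 + b1*X + b2*Z + b3*(X*Z) + noise
b0, b1, b2, b3 = 0.184, -0.103, -0.069, 0.093
X = rng.normal(size=n)
Z = rng.integers(0, 2, size=n).astype(float)
Y = b0 + b1 * X + b2 * Z + b3 * X * Z + 0.01 * rng.normal(size=n)

# Fit the interaction model by ordinary least squares
design = np.column_stack([np.ones(n), X, Z, X * Z])
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)

slope_z0 = coef[1]             # marginal effect of X when Z = 0 (close to b1)
slope_z1 = coef[1] + coef[3]   # marginal effect of X when Z = 1 (close to b1 + b3)
```

                    These two slopes are what -margins Z, dydx(X)- reports after a factor-variable regression.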

                    You do not have to do these calculations by hand. In fact, you don't even really need to remember or understand these formulas, though I think it is better if you do. In Stata, if you have used factor variable notation in the regression, the -margins- command will calculate them for you:
                    Code:
                    regression_command Y i.Z##c.X
                    margins Z, dydx(X)
                    That's all there is to it. If you are unsure what the marginal effects actually mean, it helps to follow the -margins- command with -marginsplot-. That way you can see the regression lines with your own eyes, see which direction each one points, and it will be clear what is going on.

                    Added: Sorry, you need to do a little more with -margins- to get the graphs that I referred to. After what I showed above, you can do:
                    Code:
                    summ X, meanonly
                    local low = r(min)
                    local high = r(max)
                    margins Z, at(X = (`low' `high'))
                    marginsplot, xdimension(X)
                    Last edited by Clyde Schechter; 19 Jun 2023, 12:41.

                    • #25
                      Thank you Clyde. It is all very clear now.

                      • #26
                        Clyde, can you please help clarify one more ambiguity? For plotting marginal effects, is linear regression necessary? I estimated my model using system GMM with instruments to address endogeneity and autocorrelation. When I plot the marginal effects, the effects for the reference (omitted) category come out exactly the opposite: in the simple linear regression model the plot is a straight, upward-sloping line, while plotting marginal effects after the GMM regression gives a declining slope. Can you please advise?

                        • #27
                          I can't give you useful advice about this. I have always been skeptical about instrumental variables approaches, largely because in my field, epidemiology, it is so seldom possible to even find a variable that could plausibly serve as an instrument. (I know that a random assignment is an instrument, theoretically even a perfect instrument. I'm not talking about that. I'm talking about analyses of observational study designs.) So I have not learned much about analysis with instrumental variables. All I can say is that if a properly done instrumental variables analysis gives opposite results to a plain regression, it suggests that the plain regression results are wrong due to some unresolved confounding. But, even if you showed all your work here, I would be in no position to judge whether your instrumental variables analysis was appropriate and correctly done, nor would I be able to suggest possible sources of unresolved confounding in your regression as the subject matter is well outside my domain of respectable knowledge (let alone expertise).

                          Hopefully somebody else who is following the thread is better positioned to advise you on this aspect of things and will chime in.
