
  • Different regression methods to prove an assumption

    Hello,

    I have a question regarding different regression methods to prove my assumption. I want to know how ICT adoption influences the innovativeness of a firm. I have panel data with two waves of observations. My simple OLS model is
    Code:
     reg Inno dum_ICT firmage l10 c3 b2b i.a4a i.a1
    where Inno is the innovation index (values from 1 to 4), dum_ICT is the ICT adoption dummy, l10 is a dummy for whether the firm offered formal training programmes last year, c3 is a dummy for whether the firm applied for an electrical connection last year, b2b is the percentage of the firm owned by foreigners, a4a is a categorical variable for the industry sector and a1 is a categorical variable for the country. In the model above, dum_ICT is highly significant, and it turns out that if a firm adopted ICT, its innovativeness increases. Here you can see the output with the industry-sector and country dummies omitted:

    Code:
    . regress Inno dum_ICT firmage l10 c3 b2b i.a4a i.a1
    
          Source |       SS           df       MS      Number of obs   =     5,167
    -------------+----------------------------------   F(53, 5113)     =     56.83
           Model |  3378.29391        53  63.7413945   Prob > F        =    0.0000
        Residual |  5735.11813     5,113   1.1216738   R-squared       =    0.3707
    -------------+----------------------------------   Adj R-squared   =    0.3642
           Total |  9113.41204     5,166  1.76411383   Root MSE        =    1.0591
    
    ------------------------------------------------------------------------------
            Inno |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
         dum_ICT |   .2775897    .046647     5.95   0.000     .1861417    .3690376
         firmage |   -.000695   .0010344    -0.67   0.502    -.0027229    .0013329
             l10 |    .447571    .033803    13.24   0.000     .3813027    .5138393
              c3 |   .1906314   .0429038     4.44   0.000     .1065216    .2747412
             b2b |   .0007365   .0005533     1.33   0.183    -.0003482    .0018211
                 |
    But now I want to include different types of regressions with the same model in order to convince the reader that my hypothesis, that firms with higher ICT adoption perform better, holds. So I want further evidence.
    Therefore I adapted the model above in order to see whether firms which had no ICT in year 1 but adopted ICT between year 1 and year 2 perform better:

    Code:
    .  xtset panelid year
           panel variable:  panelid (strongly balanced)
            time variable:  year, 1 to 2
                    delta:  1 unit
    
    .  regress Inno dum_ICT firmage l10 c3 b2b i.a4a i.a1 if year==2 & L.dum_ICT==0
    
          Source |       SS           df       MS      Number of obs   =     3,103
    -------------+----------------------------------   F(52, 3050)     =     42.74
           Model |  2659.28191        52  51.1400367   Prob > F        =    0.0000
        Residual |  3649.46382     3,050  1.19654552   R-squared       =    0.4215
    -------------+----------------------------------   Adj R-squared   =    0.4117
           Total |  6308.74573     3,102  2.03376716   Root MSE        =    1.0939
    
    ------------------------------------------------------------------------------
            Inno |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
         dum_ICT |   .2845822   .0697638     4.08   0.000     .1477933     .421371
         firmage |  -.0010243   .0015315    -0.67   0.504    -.0040271    .0019784
             l10 |   .5705493   .0458074    12.46   0.000     .4807328    .6603659
              c3 |   .1032498   .0581832     1.77   0.076    -.0108325    .2173322
             b2b |    .000677   .0007668     0.88   0.377    -.0008264    .0021804
                 |
             a4a |
              2  |  -.2885407   .2943741    -0.98   0.327    -.8657323    .2886509
              3  |  -.2833497   .3388527    -0.84   0.403    -.9477525    .3810531
              4  |   .0474887    .355741     0.13   0.894    -.6500278    .7450051
              5  |  -.4421766   .2829037    -1.56   0.118    -.9968779    .1125246
              6  |  -.4162328   .2702835    -1.54   0.124    -.9461891    .1137235
              7  |  -.0908338   .3143731    -0.29   0.773    -.7072384    .5255707
              8  |  -.3757523   .2803625    -1.34   0.180    -.9254708    .1739663
              9  |  -.5647735   .2802044    -2.02   0.044    -1.114182    -.015365
             10  |  -.5570642   .4010146    -1.39   0.165     -1.34335     .229222
             11  |   .0529539   .4912183     0.11   0.914    -.9101984    1.016106
             12  |  -.1009795   .3189997    -0.32   0.752    -.7264557    .5244966
             13  |   -.197411   .2702716    -0.73   0.465    -.7273438    .3325219
             14  |  -.6717909   .4492206    -1.50   0.135    -1.552597    .2090148
             15  |  -.1987418   .3154601    -0.63   0.529    -.8172777    .4197941
             16  |    .290117   .3185079     0.91   0.362    -.3343948    .9146287
             17  |   -1.44027   .8197693    -1.76   0.079    -3.047626    .1670859
             18  |   .5777043   .8202959     0.70   0.481    -1.030684    2.186093
             19  |  -.3818752   .2680905    -1.42   0.154    -.9075315    .1437811
             20  |  -.2772761   .2735206    -1.01   0.311    -.8135795    .2590272
             21  |  -.6216942   .3072893    -2.02   0.043    -1.224209   -.0191792
             23  |  -.4837853   .3432089    -1.41   0.159    -1.156729     .189159
             24  |  -.4641845   .2853611    -1.63   0.104    -1.023704     .095335
             25  |   -.971511   .3362687    -2.89   0.004    -1.630847   -.3121749
                 |
              a1 |
             11  |   .1634681   .1675756     0.98   0.329    -.1651045    .4920408
             12  |  -.3999711   .1618933    -2.47   0.014     -.717402   -.0825401
             13  |  -.1681401   .2141821    -0.79   0.432     -.588096    .2518158
             14  |  -.2134868   .2321303    -0.92   0.358    -.6686344    .2416608
             15  |   .3120863   .1438212     2.17   0.030       .03009    .5940826
             16  |  -.2574673   .1790772    -1.44   0.151    -.6085916     .093657
             17  |  -.1769258   .1475234    -1.20   0.231    -.4661812    .1123295
             18  |   .4848021   .1341442     3.61   0.000       .22178    .7478242
             19  |   1.083772   .1396287     7.76   0.000     .8099967    1.357548
             20  |   1.801743   .2438499     7.39   0.000     1.323616     2.27987
             21  |   1.915563     .14788    12.95   0.000     1.625609    2.205518
             22  |  -.0225719   .1868979    -0.12   0.904    -.3890305    .3438867
             23  |   1.349221   .1709095     7.89   0.000     1.014112    1.684331
             25  |  -.0714195   .1505803    -0.47   0.635    -.3666687    .2238297
             27  |   1.790655   .1183697    15.13   0.000     1.558562    2.022747
             28  |   1.999689   .1823233    10.97   0.000       1.6422    2.357178
             29  |   .9985773   .1282099     7.79   0.000     .7471907    1.249964
             30  |   .6118559   .1740438     3.52   0.000     .2706009     .953111
             31  |  -.4342055   .1394508    -3.11   0.002    -.7076325   -.1607785
             32  |   2.301774   .1525156    15.09   0.000      2.00273    2.600817
             33  |   .7592134   .2025517     3.75   0.000     .3620618    1.156365
             34  |   1.854734   .1338328    13.86   0.000     1.592322    2.117146
             35  |   1.692545   .1409749    12.01   0.000      1.41613    1.968961
             36  |   .7192039   .1578082     4.56   0.000     .4097826    1.028625
                 |
           _cons |   .2699117    .281987     0.96   0.339     -.282992    .8228155
    ------------------------------------------------------------------------------
    So it turns out yes, they do.

    But I want further evidence, so I turn to treatment effects, here a propensity-score matching model with a logit treatment model.
    Outcome variable: Inno
    Treatment variable: dum_ICT
    Treatment-model covariates: firmage l10 c3 b2b

    Code:
    . teffects psmatch (Inno) (dum_ICT firmage l10 c3 b2b)
    
    Treatment-effects estimation                   Number of obs      =      6,460
    Estimator      : propensity-score matching     Matches: requested =          1
    Outcome model  : matching                                     min =          1
    Treatment model: logit                                        max =        206
    ------------------------------------------------------------------------------
                 |              AI Robust
            Inno |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
    ATE          |
         dum_ICT |
       (1 vs 0)  |   .3910807   .0534788     7.31   0.000     .2862642    .4958973
    ------------------------------------------------------------------------------


    That model also shows that ICT adoption is significant: if a firm adopts ICT, its innovativeness increases. My question now is, what else would you suggest I do in order to support my assumption? Would you suggest another estimation technique? I am also thinking about interaction terms in order to see why ICT adoption positively affects innovation, so I am curious about interesting interactions with ICT.
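    For example, I imagine something like the following (just a sketch; I have not decided on the exact interaction, and the age grid is only an example):
    Code:
    * interaction of the ICT dummy with firm age (variable names as in the model above)
    regress Inno i.dum_ICT##c.firmage l10 c3 b2b i.a4a i.a1
    * how the ICT effect varies with firm age (grid values are just an example)
    margins, dydx(dum_ICT) at(firmage=(0(10)50))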
    By the way, I will do all the regressions I showed above for the dependent variables:

    Capacity Utilization
    Export Rate
    Gross Profit Margin
    Labor Productivity and
    Cost of Material Ratio

    in order to show how ICT adoption affects overall firm performance.

    Thank you very much.


    Kind regards
    Dominik
    Last edited by Dominik Miksch; 23 May 2019, 04:14.

  • #2
    I would really appreciate it if someone could help me.

    Kind regards
    Dominik



    • #3
      Dominik:
      first off, why consider -regress- as your first choice if you actually have panel data?
      On top of that, you run an OLS with no clustered standard errors; this tells Stata that all your observations are independent, whereas, due to the panel structure of your data, that is not the case.
      Moreover, you do not tell us whether any postestimation analysis has been carried out on your OLS in order to check whether its specification fits your data.
      It is also redundant to -xtset- your data and then run OLS.
      Finally, presenting many regression models to prove the same assumption is less rewarding than fitting one methodologically sound model that gives a fair and true view of the data-generating process underlying your data: see the literature in your research field on that.
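      Just as an untested sketch based on the predictors you posted (whether -re- or -fe- is appropriate should be checked, and -fe- would drop time-invariant predictors such as the country indicators):
      Code:
      * panel estimator with standard errors clustered on the panel identifier
      xtset panelid year
      xtreg Inno i.dum_ICT firmage l10 c3 b2b i.a4a i.a1, re vce(cluster panelid)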
      Kind regards,
      Carlo
      (Stata 19.0)



      • #4
        I will agree with almost everything Carlo wrote, and certainly with all the statistical advice.

        The xtset was done at my suggestion in a previous topic, to make it easy to use the wave-1 value of dum_ICT to construct the if clause in the OLS regression. I did not review the advisability of the modelling (that topic did not mention the first model including both waves), beyond pointing out in a second post that dum_ICT should at a minimum have been included in the model as i.dum_ICT, and that perhaps dum_ICT need not have been created at all, but instead expressed (in the model and in the if clause) using factor-variable notation applied to some other categorical variable.
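        For example (a sketch, using the variable names from post #1), what I had in mind was something like:
        Code:
        * same regression as before, but with the treatment dummy entered as a factor variable
        regress Inno i.dum_ICT firmage l10 c3 b2b i.a4a i.a1 if year==2 & L.dum_ICT==0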

        The previous topic is at

        https://www.statalist.org/forums/for...if-and-command



        • #5
          Carlo Lazzaro Thank you very much for your answer. Would you rather make the argument with the propensity-score model first and only afterwards with the xtset and regress commands? Or can you recommend other, more suitable regressions to me?

          Regarding the standard errors: can I use the
          Code:
          , robust
          option in order to avoid telling Stata that all my observations are independent?
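          Or would the standard errors rather have to be clustered at the firm level, something like this (just my guess)?
          Code:
          * clustering on the panel identifier instead of plain robust standard errors
          regress Inno i.dum_ICT firmage l10 c3 b2b i.a4a i.a1, vce(cluster panelid)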
          Regarding the postestimation analysis: I indeed haven't done it so far, but William Lisowski also gave me that hint, thank you for that. I used the -margins- command and Stata gave me the following:

          Code:
          . margins
          
          Predictive margins                              Number of obs     =      3,103
          Model VCE    : OLS
          
          Expression   : Linear prediction, predict()
          
          ------------------------------------------------------------------------------
                       |            Delta-method
                       |     Margin   Std. Err.      t    P>|t|     [95% Conf. Interval]
          -------------+----------------------------------------------------------------
                 _cons |   1.109571   .0196369    56.50   0.000     1.071068    1.148074
          ------------------------------------------------------------------------------
          Does that mean my average predicted value is approximately 1.11?
          If I set the dummy c3 to 1, Stata tells me that firms which applied for an electrical connection last year have a higher predicted degree of innovation than firms which did not?

          Code:
          . margins, at (c3=(0 1))
          
          Predictive margins                              Number of obs     =      3,103
          Model VCE    : OLS
          
          Expression   : Linear prediction, predict()
          
          1._at        : c3              =           0
          
          2._at        : c3              =           1
          
          ------------------------------------------------------------------------------
                       |            Delta-method
                       |     Margin   Std. Err.      t    P>|t|     [95% Conf. Interval]
          -------------+----------------------------------------------------------------
                   _at |
                    1  |   1.094165   .0214704    50.96   0.000     1.052067    1.136263
                    2  |   1.197415   .0532544    22.48   0.000     1.092997    1.301833
          ------------------------------------------------------------------------------
          If I compute the margins for the percentage of foreign ownership from 0 to 100% in 10% intervals, I get the following:

          Code:
          margins, at(b2b=(0(10)100))
          
          Predictive margins                              Number of obs     =      3,103
          Model VCE    : OLS
          
          Expression   : Linear prediction, predict()
          
          1._at        : b2b             =           0
          
          2._at        : b2b             =          10
          
          3._at        : b2b             =          20
          
          4._at        : b2b             =          30
          
          5._at        : b2b             =          40
          
          6._at        : b2b             =          50
          
          7._at        : b2b             =          60
          
          8._at        : b2b             =          70
          
          9._at        : b2b             =          80
          
          10._at       : b2b             =          90
          
          11._at       : b2b             =         100
          
          ------------------------------------------------------------------------------
                       |            Delta-method
                       |     Margin   Std. Err.      t    P>|t|     [95% Conf. Interval]
          -------------+----------------------------------------------------------------
                   _at |
                    1  |     1.1028   .0210815    52.31   0.000     1.061464    1.144135
                    2  |    1.10957   .0196369    56.50   0.000     1.071067    1.148073
                    3  |    1.11634   .0210801    52.96   0.000     1.075007    1.157672
                    4  |    1.12311   .0249142    45.08   0.000     1.074259     1.17196
                    5  |    1.12988   .0302431    37.36   0.000     1.070581    1.189179
                    6  |    1.13665   .0364165    31.21   0.000     1.065247    1.208053
                    7  |    1.14342   .0430727    26.55   0.000     1.058965    1.227874
                    8  |    1.15019   .0500194    22.99   0.000     1.052115    1.248265
                    9  |    1.15696   .0571507    20.24   0.000     1.044902    1.269018
                   10  |    1.16373   .0644054    18.07   0.000     1.037448    1.290012
                   11  |     1.1705    .071746    16.31   0.000     1.029825    1.311176
          ------------------------------------------------------------------------------
          So here, too, I can see that as the percentage of foreign ownership increases, the predicted innovativeness increases (although the coefficient on b2b is not statistically significant).

          But would you suggest adding any further evidence or tests?
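          For the further checks, I was thinking of postestimation commands along these lines after the OLS (I am not sure whether these are the appropriate ones):
          Code:
          * Ramsey RESET test for omitted variables / functional form
          estat ovtest
          * Breusch-Pagan / Cook-Weisberg test for heteroskedasticity
          estat hettest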

          Kind regards
          Dominik
          Last edited by Dominik Miksch; 24 May 2019, 02:04.



          • #6
            Hello Statalist community,

            could you please help me with the interpretation of the following postestimation command (-margins-)?

            Code:
            . regress Inno c.ICT_percentage##c.ICT_percentage firmage l1 c3 c6 i.a4a i.a1 if year==2 & L.ICT_percentage <=20
            
                  Source |       SS           df       MS      Number of obs   =     1,370
            -------------+----------------------------------   F(53, 1316)     =     29.31
                   Model |  1602.20862        53  30.2303513   Prob > F        =    0.0000
                Residual |  1357.48481     1,316  1.03152341   R-squared       =    0.5413
            -------------+----------------------------------   Adj R-squared   =    0.5229
                   Total |  2959.69343     1,369  2.16193823   Root MSE        =    1.0156
            
            ---------------------------------------------------------------------------------------------------
                                         Inno |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
            ----------------------------------+----------------------------------------------------------------
                               ICT_percentage |   .0454914   .0081069     5.61   0.000     .0295875    .0613952
                                              |
            c.ICT_percentage#c.ICT_percentage |  -.0005593   .0001138    -4.92   0.000    -.0007825   -.0003361
                                              |
                                      firmage |  -.0018753   .0018048    -1.04   0.299    -.0054159    .0016654
                                           l1 |   .0000724   .0000664     1.09   0.276    -.0000578    .0002026
                                           c3 |   .1530584   .0860939     1.78   0.076    -.0158379    .3219548
                                           c6 |    .212252   .0782352     2.71   0.007     .0587727    .3657313
                                              |
                                          a4a |
                                           2  |  -.2159865   .3155645    -0.68   0.494    -.8350509    .4030778
                                           3  |  -.1121895   .3594475    -0.31   0.755    -.8173421    .5929631
                                           4  |   .1377966    .366335     0.38   0.707    -.5808678     .856461
                                           5  |   -.369737    .312782    -1.18   0.237    -.9833428    .2438689
                                           6  |  -.1021984   .2997502    -0.34   0.733    -.6902388     .485842
                                           7  |   .3783254   .3764894     1.00   0.315    -.3602595     1.11691
                                           8  |  -.2286283   .3232605    -0.71   0.480    -.8627905     .405534
                                           9  |  -.5167653   .3201113    -1.61   0.107     -1.14475    .1112189
                                          10  |  -.5089456   .4209658    -1.21   0.227    -1.334783    .3168918
                                          11  |  -.0188958   .4809339    -0.04   0.969    -.9623767    .9245851
                                          12  |  -.0364887   .3448283    -0.11   0.916    -.7129618    .6399844
                                          13  |  -.2166198   .3018173    -0.72   0.473    -.8087154    .3754758
                                          14  |  -.7490697   .4605976    -1.63   0.104    -1.652656    .1545161
                                          15  |   .0098462   .3360175     0.03   0.977    -.6493422    .6690346
                                          16  |   .0345462   .3457821     0.10   0.920    -.6437982    .7128906
                                          17  |  -1.535198   .7771162    -1.98   0.048     -3.05972   -.0106756
                                          18  |   .7230056   .7772562     0.93   0.352    -.8017909    2.247802
                                          19  |    .003666   .3002322     0.01   0.990      -.58532     .592652
                                          20  |  -.2453396   .3100887    -0.79   0.429    -.8536617    .3629825
                                          21  |  -.6512664   .3425804    -1.90   0.058     -1.32333     .020797
                                          23  |  -.2782228   .4207066    -0.66   0.509    -1.103552    .5471062
                                          24  |  -.3946428    .312745    -1.26   0.207    -1.008176    .2188905
                                          25  |  -.8228146   .4210755    -1.95   0.051    -1.648867    .0032378
                                              |
                                           a1 |
                                          11  |   .1831849   .2303103     0.80   0.427    -.2686305    .6350004
                                          12  |  -.2348465   .2086304    -1.13   0.261    -.6441311     .174438
                                          13  |  -.0641707   .2316078    -0.28   0.782    -.5185315      .39019
                                          14  |  -.0637736   .3177197    -0.20   0.841    -.6870661    .5595188
                                          15  |    .446383   .2105295     2.12   0.034      .033373    .8593931
                                          16  |  -.0317801     .26037    -0.12   0.903    -.5425657    .4790056
                                          17  |  -.0240553    .235726    -0.10   0.919    -.4864951    .4383844
                                          18  |   .9233698   .1894059     4.88   0.000     .5517993     1.29494
                                          19  |   1.171866   .1880565     6.23   0.000     .8029431     1.54079
                                          20  |   2.916206   .3526687     8.27   0.000     2.224351     3.60806
                                          21  |   1.952961   .1924001    10.15   0.000     1.575517    2.330405
                                          22  |   .1510064   .2610799     0.58   0.563    -.3611718    .6631847
                                          23  |   1.865506   .2886291     6.46   0.000     1.299282    2.431729
                                          25  |  -.0319352   .2357815    -0.14   0.892    -.4944839    .4306134
                                          27  |   2.475708   .2012857    12.30   0.000     2.080833    2.870584
                                          28  |   2.547968   .2713736     9.39   0.000     2.015596     3.08034
                                          29  |   1.363376   .2194429     6.21   0.000     .9328798    1.793872
                                          30  |   .8566836   .2694809     3.18   0.002     .3280245    1.385343
                                          31  |  -.1555558   .1806095    -0.86   0.389    -.5098698    .1987582
                                          32  |   2.498109   .2081875    12.00   0.000     2.089694    2.906525
                                          33  |   .8483156   .2646769     3.21   0.001     .3290809     1.36755
                                          34  |   2.556525   .1998441    12.79   0.000     2.164477    2.948573
                                          35  |   1.970318   .1993008     9.89   0.000     1.579336      2.3613
                                          36  |   .8531095   .2026048     4.21   0.000     .4556458    1.250573
                                              |
                                        _cons |  -.4396663   .3421566    -1.28   0.199    -1.110898    .2315656
            ---------------------------------------------------------------------------------------------------
            
            . margins, at(ICT=(0(10)100))
            
            Predictive margins                              Number of obs     =      1,370
            Model VCE    : OLS
            
            Expression   : Linear prediction, predict()
            
            1._at        : ICT_percen~e    =           0
            
            2._at        : ICT_percen~e    =          10
            
            3._at        : ICT_percen~e    =          20
            
            4._at        : ICT_percen~e    =          30
            
            5._at        : ICT_percen~e    =          40
            
            6._at        : ICT_percen~e    =          50
            
            7._at        : ICT_percen~e    =          60
            
            8._at        : ICT_percen~e    =          70
            
            9._at        : ICT_percen~e    =          80
            
            10._at       : ICT_percen~e    =          90
            
            11._at       : ICT_percen~e    =         100
            
            ------------------------------------------------------------------------------
                         |            Delta-method
                         |     Margin   Std. Err.      t    P>|t|     [95% Conf. Interval]
            -------------+----------------------------------------------------------------
                     _at |
                      1  |   .4894287     .11085     4.42   0.000     .2719667    .7068907
                      2  |   .8884117   .0479881    18.51   0.000     .7942701    .9825532
                      3  |   1.175533   .0317861    36.98   0.000     1.113176     1.23789
                      4  |   1.350793   .0516731    26.14   0.000     1.249423    1.452164
                      5  |   1.414192   .0681875    20.74   0.000     1.280424     1.54796
                      6  |   1.365729   .0861601    15.85   0.000     1.196703    1.534755
                      7  |   1.205405   .1207255     9.98   0.000     .9685695     1.44224
                      8  |   .9332193   .1810877     5.15   0.000     .5779673    1.288471
                      9  |   .5491723   .2680958     2.05   0.041     .0232304    1.075114
                     10  |   .0532637   .3804306     0.14   0.889     -.693053    .7995805
                     11  |  -.5545062   .5170702    -1.07   0.284    -1.568878    .4598657
            ------------------------------------------------------------------------------
            
            . marginsplot, recast(line) recastci(rarea) name(MarginsForeignOwnership, replace)
            
              Variables that uniquely identify margins: ICT_percentage
            I did it like Alan Neustadtl at minute 09:35 (https://www.youtube.com/watch?v=tJFzwFOKEcc), but I don't understand why he creates an interaction term between education and education itself.
            He also mentions that in his case there seems to be an interaction effect.
            My "interaction term" is also significant, but does it make sense? I attached a picture of how it looks. As I said, I would really appreciate your help with the interpretation in my case.


            Kind regards
            Dominik
            [Attached image: Interaction.PNG (marginsplot of the predicted Inno over ICT_percentage)]






            • #7
              Please, if one of you can help me, I would appreciate it.



              • #8
                Dominik:
                1) interacting a given predictor with itself serves to investigate whether a quadratic (squared) relationship exists with the dependent variable;
                2) as far as -ICT_percentage- is concerned, you can calculate the maximum of your parabola (the point where the first derivative equals zero) via the formula -b/(2a), where b is the coefficient of the linear term and a is the coefficient of the squared term.
                In your case (a is negative, so the signs cancel), you should obtain:
                Code:
                . di .0454914/(2*.0005593)
                40.668157
                If this percentage value (about 40.67%) falls within the range of your -ICT_percentage- variable, you can state that, other things being equal, the regressand reaches its maximum when -ICT_percentage- = 40.67% and then decreases.
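                As an aside, if you also want a standard error and a confidence interval for that turning point, something along the lines of -nlcom- run right after your regression might work (a sketch; the coefficient names are taken from your output):
                Code:
                * turning point of the quadratic, -b/(2a), computed from the stored estimates
                nlcom -_b[ICT_percentage]/(2*_b[c.ICT_percentage#c.ICT_percentage])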
                Kind regards,
                Carlo
                (Stata 19.0)



                • #9
                  Dear Carlo Lazzaro, thank you very much for your support! That helped me a lot.



                  • #10
                    Carlo Lazzaro One last question regarding this topic. Because propensity-score matching compares (in my case) the innovativeness of the group of firms that adopted ICT with that of the firms that did not, can I conclude the following: firms that adopted ICT up to 40.67% are more innovative than the group that did not adopt ICT, and firms with an ICT adoption index higher than 40.67% are less innovative than the group that did not adopt ICT?

                    It would be really helpful if you could answer that last question. Thank you so much for your help, Carlo!

                    Kind regards
                    Dominik



                    • #11
                      Dominik:
                      I would say that, other things being equal (i.e., adjusted for the other predictors), the dependent variable (say, innovation_attitude) increases until ICT adoption reaches 40.67% (where innovation_attitude reaches its maximum) and then decreases. Granted that this topic is miles away from my research field, I think it would be interesting to explain in your paper why your analysis seems to support that, other things being equal, increasing ICT adoption beyond 40.67% does not produce a further increase in innovation_attitude.
                      Kind regards,
                      Carlo
                      (Stata 19.0)



                      • #12
                        Dear Carlo Lazzaro, thank you very much! That is indeed interesting, but how would you approach investigating that? By using interactions between the other independent variables in my model and ICT adoption at levels above 40.67%?



                        • #13
                          Dominik:
                          no, I did not mean investigating it statistically, but comparing your results with the findings of other similar studies in your research field (something to be included in the Discussion section of your paper).
                          Kind regards,
                          Carlo
                          (Stata 19.0)
