  • Re: xtpedroni

    Hi,

    I have implemented the command xtpedroni in Stata to check for cointegration. xtpedroni is a user-written command and appears in The Stata Journal: https://www.econ.uzh.ch/dam/jcr:0000...3e4/sj14-3.pdf (page 684 in the journal, page 238 in the PDF).

    I am unsure how to interpret the cointegration statistics. All test statistics are distributed N(0,1) under a null of no cointegration.


    Stata returns:

    Test stat     Panel      Group
    v              1.29         .
    rho           -7.286      -5.6
    t            -11.37      -12.46
    adf           -7.311      -6.243

    I would really appreciate any assistance.

    Kind regards,

    Paula

  • #2
    So I e-mailed the author of the xtpedroni code and here is what he had to say:

    "The test statistics for the cointegration tests have been normalized to the N(0,1) distribution (as the command states under the output), and so to find the p-values you would follow the exact same procedure as you would with any other standard normal test statistic (z score). Look into the normal(z) function of stata."

    Again, I am a little unsure of the next step of interpreting the cointegration statistics. Can anyone help?

    • #3
      The values you see are normally distributed. If you want the corresponding p-values, you can type, e.g.,
      Code:
      di 1-normal(1.29)
      which gives you the p-value for the panel v statistic, .09852533. Hence, you cannot reject the H0 of no cointegration at the 5% level.
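
      As an aside, here is a minimal sketch of the same calculation done programmatically, in case you want to keep the p-value around for later comparison (the 1.29 is the panel v statistic from #1):
      Code:
      local z = 1.29
      local p = 1 - normal(`z')
      di "one-tailed p-value = " %9.6f `p'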

      • #4
        Originally posted by Jesse Wursten View Post
        The values you see are normally distributed. If you want the corresponding p-values, you can type, e.g.,
        Code:
        di 1-normal(1.29)
        which gives you the p-value for the panel v statistic, .09852533. Hence, you cannot reject the H0 of no cointegration at the 5% level.
        Hi Jesse,

        Thank you for your response.

        I have applied your method to other journal articles in the hope of replicating their results; however, it does not replicate them. I am at a loss as to what the issue could be, as your command seemed correct.
        Last edited by Paula Castellanos; 29 Aug 2016, 09:31.

        • #5
          Originally posted by Paula Castellanos View Post

          Hi Jesse,

          Thank you for your response.

          How would I check 1% or 10% significance?
          The same way. E.g.
          0.0985 > 0.01 (not significant at 1%)
          0.0985 > 0.05 (not significant at 5%)
          0.0985 < 0.10 (significant at 10%)
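
          If you prefer to let Stata do the comparison, here is a minimal sketch (the p-value and the three conventional levels are the ones listed above):
          Code:
          local p = 1 - normal(1.29)
          foreach a in 0.01 0.05 0.10 {
              di "alpha = `a': " cond(`p' < `a', "significant", "not significant")
          }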

          • #6
            In general, given a p-value for a test statistic, you compare the p-value to the desired significance level. Since 0.098 > 0.05, it is not significant at the 5% level. Since 0.098 > 0.01, it is not significant at the 1% level. Since 0.098 < 0.10, it is significant at the 10% level.

            With that said, I think perhaps the test is meant to be a two-tailed test, so the p-value of a z-score of 1.29 is really
            Code:
            . di 2*(1-normal(1.29))
            .19705066
            and the results are not significant at the 10% level.

            Note that all the other tests will be significant at all of the usual levels.
            Code:
            . di %10.8f 2*(1-normal(5.6))
            0.00000002
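
            As a quick check, a small sketch looping over all of the statistics from #1 (the values are copied from that post; abs() handles the negative statistics in the two-tailed calculation):
            Code:
            foreach z in 1.29 -7.286 -5.6 -11.37 -12.46 -7.311 -6.243 {
                di "z = `z', two-tailed p = " %10.8f 2*(1-normal(abs(`z')))
            }
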
            My knowledge of xtpedroni and the interpretation of the various tests is non-existent, however, so I'll leave it to others to explain the meaning of these differing results.

            • #7
              Just a cautionary note, which might (or might not) resolve the issue above. The p-value that Jesse obtains in #3

              Code:
              di 1-normal(1.29)
              is from a one-tailed test. Stata in most cases reports two-tailed p-values, so you need to multiply your p-values by two for correct inference:

              Code:
              2*(1-normal(1.29))
              Edit: Crossed with William

              • #8
                Originally posted by Andrew Musau View Post
                Just a cautionary note, which might (or might not) resolve the issue above. The p-value that Jesse obtains in #3

                Code:
                di 1-normal(1.29)
                is from a one-tailed test. Stata in most cases reports two-tailed p-values, so you need to multiply your p-values by two for correct inference:

                Code:
                2*(1-normal(1.29))
                Edit: Crossed with William
                Hi Andrew,

                Thanks for your response (and thanks to everyone else too).

                When I try your code in Stata it says, '2 is not a valid command name'.

                • #9
                  Apologies, should be


                  Code:
                  di 2*(1-normal(1.29))
                  where di is short for display. Also, you may want to take a look at this calculator to make sure that you are getting the correct values with Stata:

                  http://www.socscistatistics.com/pval...tribution.aspx
                  Last edited by Andrew Musau; 29 Aug 2016, 09:50.

                  • #10
                    Originally posted by Andrew Musau View Post
                    Apologies, should be


                    Code:
                    di 2*(1-normal(1.29))
                    where di is short for display.
                    Thanks.

                    In one journal article I am trying to replicate, they run the Pedroni test. They get a value of -0.53, and the associated p-value is 0.35. Unfortunately, neither your solution nor Jesse's returns a p-value of 0.35 when I run them. I am at a loss.

                    I thank you both for your patience and your efforts (and excuse my poor English).

                    Paula

                    • #11
                      You should check the distribution of the Pedroni statistic in that paper. The one produced by Stata is distributed N(0,1), but if someone else programmed it in different software, it may follow a different distribution. That said, the values you report are very close to a t-distribution with one degree of freedom:

                      Code:
                      . di ttail(1, 0.529)
                      .34511755
                      Alternatively

                      Code:
                      . di 1-ttail(1, -0.529)
                      .34511755
                      If I round 0.529 to two decimal places, I get 0.53; the p-value, in turn, rounds to 0.35 (one-tailed this time).
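
                      For contrast, a minimal side-by-side sketch of the two candidate distributions for the same statistic (0.529, as above; the values in the comments are approximate):
                      Code:
                      * one-tailed p-value if the statistic were N(0,1): about 0.30
                      di 1-normal(0.529)
                      * one-tailed p-value under a t-distribution with 1 df: about 0.345, which rounds to the paper's 0.35
                      di ttail(1, 0.529)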

                      Hope this helps!
                      Last edited by Andrew Musau; 29 Aug 2016, 10:18.

                      • #12
                        Originally posted by Andrew Musau View Post
                        You should check the distribution of the Pedroni statistic in that paper. The one produced by Stata is distributed N(0,1), but if someone else programmed it in different software, it may follow a different distribution. That said, the values you report are very close to a t-distribution with one degree of freedom:

                        Code:
                        . di ttail(1, 0.529)
                        .34511755
                        Alternatively

                        Code:
                        . di 1-ttail(1, -0.529)
                        .34511755
                        If I round 0.529 to two decimal places, I get 0.53; the p-value, in turn, rounds to 0.35 (one-tailed this time).

                        Hope this helps!
                        Hi Andrew,

                        I appreciate your time. Your code worked for this value and corroborated the paper's p-value. I tried verifying their other p-values but was unable to, and I have double-checked the distribution used by the paper.

                        Thank you for your time though

                        • #13
                          Hey all,
                          I read the above correspondence and then read https://www.econ.uzh.ch/dam/jcr:0000...3e4/sj14-3.pdf in The Stata Journal (2014) 14, Number 3, pp. 684–692.
                          I am still not sure whether the -xtpedroni- command for panel cointegration is a one- or two-tailed test.
                          Does anyone know?
                          Many thanks,
                          Anat

                          • #14
                            Hi Anat. Usually there has to be a justification for reporting a one-tailed p-value, and most test commands would otherwise report two-tailed p-values. In this paper, the authors do not explicitly state the p-values, but you can infer that these are two-tailed tests. On p. 691, the authors state:

                            All the tests, except the panel t and ADF statistics, are significant at least at the 10% level.

                            Taking the panel t statistic (t = 1.434), note that the one-tailed p-value is significant at the 10% level whereas the two-tailed one is not; since the authors call it not significant, they must be using two-tailed p-values.

                            Code:
                            *ONE-TAILED
                            . di 1-normal(1.434)
                            .07578613

                            *TWO-TAILED
                            . di 2*(1-normal(1.434))
                            .15157226

                            • #15
                              Hi Andrew, I can see your point. However, I contacted Timothy Neal, the author of this code, and asked him about it. Here are my question and his reply:
                              I am using your command -xtpedroni- in Stata 14.1. In order to interpret the results, I am not sure whether the test is one- or two-tailed.
                              Prof. Neal answered:
                              It should be one-tailed.
                              So... I really don't know what to say.
