
  • Permutation tests with metareg produce different results each time I run them, some of which report a p-value of 0

    Hello everyone,

    I'm having a bit of trouble using Monte Carlo permutation tests in conjunction with the metareg command, in an analysis of the effect of CCT program characteristics on effect sizes for education attendance across 27 program studies. Each time I run the specification it produces a different set of unadjusted and adjusted p-values. Running the specification from regression (4) three times with the permute() option gives the results below. The values differ on each run: the unadjusted p-values don't match those in regression (4), the adjusted p-values change every time, and some are 0. How should these results be interpreted?

    Code for regression (4), followed by the permutation test:

    metareg primaryattendanceef Comp_Cat meets2 pnet yrs_treatment mother national mbimonthly primsub2015 condachiev supply, wsse(primaryattendancese)

    metareg primaryattendanceef Comp_Cat meetses pnet yrs_treatment mother national mbimonthly primsub2015 condachiev supply, wsse(primaryattendancese) permute(27, joint(Comp_Cat meetses pnet yrs_treatment mother national mbimonthly primsub2015 condachiev supply))



    Number of obs = 27
    Permutations  = 27

    ------------------------------------
               |             P
    primarya~f | Unadjusted   Adjusted
    -----------+------------------------
      Comp_Cat |      0.037      0.037
       meetses |      0.667      1.000
          pnet |      0.000      0.000
    yrs_trea~t |      0.815      1.000
        mother |      0.259      0.963
      national |      0.037      0.370
    mbimonthly |      0.185      0.889
    prims~2015 |      0.111      0.704
    condachiev |      0.778      1.000
        supply |      0.000      0.000
    -----------+------------------------
        joint1 |      0.000
    ------------------------------------
    largest Monte Carlo SE(P) = 0.0929



    Number of obs = 27
    Permutations  = 27

    ------------------------------------
               |             P
    primarya~f | Unadjusted   Adjusted
    -----------+------------------------
      Comp_Cat |      0.000      0.037
       meetses |      0.741      1.000
          pnet |      0.000      0.037
    yrs_trea~t |      0.741      1.000
        mother |      0.593      0.926
      national |      0.074      0.444
    mbimonthly |      0.296      0.852
    prims~2015 |      0.185      0.630
    condachiev |      0.593      1.000
        supply |      0.000      0.037
    -----------+------------------------
        joint1 |      0.000
    ------------------------------------
    largest Monte Carlo SE(P) = 0.0956



    Number of obs = 27
    Permutations  = 27

    ------------------------------------
               |             P
    primarya~f | Unadjusted   Adjusted
    -----------+------------------------
      Comp_Cat |      0.000      0.148
       meetses |      0.704      1.000
          pnet |      0.000      0.111
    yrs_trea~t |      0.815      1.000
        mother |      0.296      0.963
      national |      0.074      0.519
    mbimonthly |      0.333      0.889
    prims~2015 |      0.148      0.667
    condachiev |      0.519      1.000
        supply |      0.037      0.037
    -----------+------------------------
        joint1 |      0.000
    ------------------------------------
    largest Monte Carlo SE(P) = 0.0962








    Meta-regression results. Dependent variable: primary attendance effect size.

    VARIABLES                          (1)        (2)        (3)        (4)
    Compliance Severity              0.003    0.016**    0.017**   0.018***
                                   (0.007)    (0.007)    (0.006)    (0.005)
    LAC dummy                                   0.008      0.012
                                              (0.027)    (0.020)
    Africa dummy                               -0.008
                                              (0.037)
    Meets evidence standards                   -0.021     -0.022     -0.019
                                              (0.036)    (0.033)    (0.026)
    Baseline enrolment                      -0.343***  -0.337***  -0.330***
                                              (0.105)    (0.098)    (0.086)
    Years of exposure                          -0.001     -0.001     -0.001
                                              (0.005)    (0.004)    (0.003)
    Mother dummy                               -0.002     -0.002      0.005
                                              (0.025)    (0.023)    (0.018)
    National dummy                             -0.033    -0.034*    -0.035*
                                              (0.020)    (0.019)    (0.017)
    Start-up dummy                             -0.002     -0.001
                                              (0.021)    (0.018)
    Payment frequency                           0.008      0.004      0.013
                                              (0.032)    (0.025)    (0.018)
    Average transfer                            0.000      0.000      0.000
                                              (0.000)    (0.000)    (0.000)
    Achievement conditionality                  0.001      0.001     -0.001
                                              (0.023)    (0.021)    (0.019)
    Supply component                          0.060**    0.061**   0.070***
                                              (0.023)    (0.022)    (0.017)
    Constant                         0.028   0.320***   0.312***   0.294***
                                   (0.021)    (0.100)    (0.093)    (0.087)
    Observations                        27         27         27         27

    Standard errors in parentheses
    *** p<0.01, ** p<0.05, * p<0.1

  • #2
    I don't know anything about metareg, but to get the same results over any series of Monte Carlo runs, you need to set the seed to the same value before each run. There are exceptions, as when a command internally resets its own seed.
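
    A minimal sketch of what I mean, reusing the permutation command from the original post (the seed value is arbitrary; any fixed value will do, as long as it is set again immediately before each run):

    [CODE]
    * set the seed immediately before the permutation run
    set seed 12345
    metareg primaryattendanceef Comp_Cat meetses pnet yrs_treatment mother national ///
        mbimonthly primsub2015 condachiev supply, wsse(primaryattendancese) ///
        permute(27, joint(Comp_Cat meetses pnet yrs_treatment mother national ///
        mbimonthly primsub2015 condachiev supply))

    * resetting the same seed before the second run should reproduce the same p-values
    set seed 12345
    metareg primaryattendanceef Comp_Cat meetses pnet yrs_treatment mother national ///
        mbimonthly primsub2015 condachiev supply, wsse(primaryattendancese) ///
        permute(27, joint(Comp_Cat meetses pnet yrs_treatment mother national ///
        mbimonthly primsub2015 condachiev supply))
    [/CODE]

    (Run this from a do-file so the /// line continuations are accepted.)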

    Please, in future posts, make your code and output more readable by putting everything between code delimiters ([CODE] and [/CODE]), as described in FAQ 12.
    Last edited by Steve Samuels; 09 Aug 2018, 13:31.
    Steve Samuels
    Statistical Consulting
    [email protected]

    Stata 14.2



    • #3
      Hi Steve,

      Thank you for your reply. I've set the seed to an arbitrary value using 'set seed 43647743', but I'm still having the same problem described above. Do you have any idea where I'm going wrong? Higgins and Thompson (2004) recommend using permutation tests in conjunction with meta-regression to protect against spurious findings and give more accurate p-values, so I would like to include them if possible, but I can't just pick the test result that best fits my hypothesis, as that wouldn't be very rigorous.

      Kind regards,

      Jon



      • #4
        I'm far from expert in this area, but I observe:
        1. In the Stata Journal article about metareg, the examples used 2,000 and 5,000 permutations; you specified 27.
        2. Zero unadjusted p-values are possible with a small number of permutations: with only 27 permutations, every reported p-value is a multiple of 1/27 ≈ 0.037, so a statistic more extreme than all 27 permuted statistics is reported as exactly 0.
        3. You say that you set the seed, but not that you set it before every metareg command.

        My advice is to do a single run of metareg with thousands of permutations and accept whatever the results are.
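
        Something along these lines, as a sketch only (the 5,000 permutations below simply mirror the larger of the two Stata Journal examples, and the seed value is arbitrary):

        [CODE]
        * one reproducible run with a large number of permutations
        set seed 12345
        metareg primaryattendanceef Comp_Cat meetses pnet yrs_treatment mother national ///
            mbimonthly primsub2015 condachiev supply, wsse(primaryattendancese) ///
            permute(5000, joint(Comp_Cat meetses pnet yrs_treatment mother national ///
            mbimonthly primsub2015 condachiev supply))
        [/CODE]

        With thousands of permutations the Monte Carlo standard error of each p-value, roughly sqrt(p(1-p)/B) for B permutations, drops from about 0.10 at B = 27 to under 0.01, so the reported p-values will be far more stable from run to run.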

        Please, in future posts, make your code and results more readable by enclosing everything between code delimiters, [CODE] and [/CODE], as requested in FAQ 12.

        Steve Samuels
        Statistical Consulting
        [email protected]

        Stata 14.2

