  • How to retry the command run with "simulate" when an error occurs

    I am using the -simulate- command to check the size of a statistical test.
    Specifically, my code looks like this:

    Code:
    *** This is just an example presenting the structure of my command ***
    program my_prog, rclass
        version 18
        drop _all
    
        set obs 3000
        gen Y = rnormal()
        gen X = rnormal()
    
        // the testing command goes here; it returns r(reject)
        
        return scalar reject = r(reject)    
    end
    
    simulate reject = r(reject), reps(1000): my_prog

    Depending on the generated X and Y, the testing command sometimes returns an error, and I know the error can be avoided only by increasing the sample size.
    But I want to evaluate the testing method only at a moderate sample size.

    Therefore, I am looking for a way to retry the command my_prog when an error occurs.
    In other words, I want my_prog to regenerate the data and run the test again whenever a replication would otherwise be recorded as "x" rather than ".".
    Is there a good way to do this?

  • #2
    Here is the general approach. It requires that you can identify, somewhere in the code that calculates the test, the place where the calculation may fail. Then you would use a scheme like this:

    Code:
    *** This is just an example presenting the structure of my command ***
    program my_prog, rclass
        version 18
        local done 0
        while !`done' {
            drop _all
            set obs 3000
            gen Y = rnormal()
            gen X = rnormal()
            
            // CODE TO CALCULATE THE TEST BEGINS HERE
            ...
            // AT THIS POINT THE CALCULATION MAY FAIL
            capture    command that may fail
            local done = (c(rc) == 0)
        }
        return scalar reject = r(reject)    
    end
    
    simulate reject = r(reject), reps(1000): my_prog

    This code will cause program my_prog to keep retrying the calculation until it succeeds. And it will do that without triggering an "x" iteration of -simulate-. Be warned, however, that this retry-until-success loop could be very long, or even infinite, depending on how often the method fails.
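
    If an endless loop is a concern, one variant is to cap the number of retries and exit with an error once the cap is reached, so that the replication is simply recorded as an "x" by -simulate- instead of running forever. The following is only a sketch: the cap of 100 attempts, the program name my_prog_capped, and the small-sample logit used as a stand-in for a test that occasionally fails are illustrative assumptions, not part of the original code.

    Code:
    *** Sketch: retry the calculation, but give up after a fixed number of attempts ***
    program my_prog_capped, rclass
        version 18
        local done 0
        local attempt 0
        while !`done' & `attempt' < 100 {
            local ++attempt
            drop _all
            set obs 30
            gen y = rbinomial(1, 0.1)
            gen x = rnormal()

            // stand-in test that occasionally fails (e.g., when y does not vary)
            capture logit y x
            local done = (c(rc) == 0)
        }
        if !`done' exit 498          // give up: -simulate- records this replication as "x"
        test x                       // Wald test of the coefficient on x
        return scalar reject = (r(p) < 0.05)
    end

    simulate reject = r(reject), reps(1000): my_prog_capped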

    I would also note that I probably would not use this approach to quantifying the performance of a test that sometimes fails. The frequency of failure is, itself, at least as important a test characteristic as its size. If the problem is that failure occurs sufficiently often that with 1000 reps of your original code you don't have enough successful outcomes to give a sufficiently precise estimate of the test's size, then I would say that the test is probably not useful, even if its size when successful is quite satisfactory. If you find that judgment too harsh, we could dial it back a bit and just say that to report the size of the test in successful cases without also reporting its failure rate would not be good science.
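
    If one does adopt the retry scheme, the failure rate can be reported alongside the size by returning the number of attempts each replication needed, so that -simulate- collects it together with the rejection indicator. Again, this is only a sketch: the scalar name attempts, the program name my_prog_counted, and the logit stand-in test are illustrative assumptions.

    Code:
    *** Sketch: also return how many attempts each replication needed ***
    program my_prog_counted, rclass
        version 18
        local done 0
        local attempt 0
        while !`done' {
            local ++attempt
            drop _all
            set obs 30
            gen y = rbinomial(1, 0.1)
            gen x = rnormal()

            // stand-in test that occasionally fails
            capture logit y x
            local done = (c(rc) == 0)
        }
        test x
        return scalar reject   = (r(p) < 0.05)
        return scalar attempts = `attempt'
    end

    simulate reject = r(reject) attempts = r(attempts), reps(1000): my_prog_counted
    summarize attempts    // a mean above 1 shows how often the test failed and had to be rerun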

    • #3
      Clyde Schechter, your suggestion works perfectly, and I think your additional comment is invaluable. As you mentioned, the frequency of failure is also part of the test's performance. Fortunately, the test I am evaluating is not my own. In any case, it seems better to scrutinize the reason for the failures rather than ignore them. Thank you so much.
