
  • Aleksandra Anic
    started a topic different results every time I run do file


    Dear all,

    I have pooled cross-sectional data for 4 years. Some individuals appear in only one year, but some appear in 2, 3, or 4 years. I control for time effects, i.e. I use dummy variables for the years. Every time I run the probit model I get slightly different results, e.g. an estimated coefficient is 0.143 on one run, 0.150 on the second, 0.148 on the third, etc. I have read that this is a problem of sorting on variables that are not uniquely identified. My question is how I should sort the variables in order to get the same results each time. I tried the following two sorts, but they do not work. I am not interested in panel data analysis; I want to do a basic probit analysis.

    sort idperson year
    sort year idperson

    Also, my dependent variable is employed/not employed and explanatory variables are education, age, marital status, children, etc.

    Thank you.

    Best,
    Aleksandra

  • Kate Lussy
    replied
    1. Yes, it just did not make sense to post since most of the observations disappeared through the code. I should have nonetheless. I apologise.

    2. Yes, that is true. I wanted to delete rename, for example, but I thought that maybe someone would relate the problem to the fact that I have three regressions and have not renamed the variables.

    3. Of course, I understand. I will approach him.

    4. I really appreciate your response. I did not realise that what I posted was so confusing. I guess I needed to step back. Thank you for taking the time to give me advice.

    (I have also deleted the other comment)


  • Nick Cox
    replied
    Kate Lussy You seem to have posted three versions of the same question in the last few hours. That's really not a good idea. We give advice on repeated postings at https://www.statalist.org/forums/help#adviceextras #1 (in short, please don't).

    The underlying issue is: Why is no-one answering any of these?

    1. There is nothing reproducible in any of these posts, as no example data are given. Sure, that's a tough call, as your full dataset is possibly enormous. But equally it's a tough call on readers to know what is going on without a reproducible example.

    2. Stretches of long code are hard even for experienced users to digest. But even looking at this code I see lots of detail that is clearly irrelevant to your question. In particular, renames of your variables are neither here nor there.

    3. Code chunks with unexplained community-contributed commands (here winsor2) don't help. Your code indicates that you're making strong use of asreg and asrol. Well, there is just one person who is an authority on those commands, their author. You need to ensure that he sees your questions.

    4. You're ignoring advice about formatting code readably.

    Many people watch the forum, but they peel off questions that they can answer easily and ignore the rest.

    We appreciate that you are anxious for an answer. We can guess that you're working towards a deadline or target date. Who isn't?

    None of the above is helping you, and I know that. I would answer your question directly if I could!

    But, minimally, is it nicer to ignore you or to try explaining why it is difficult even to try to answer this kind of post?

    In other forums there is advice that bites hard: https://stackoverflow.com/help/mcve. Examples should be minimal, complete, and verifiable.



  • Clyde Schechter
    replied
    Be sure to eliminate the -set sortseed- command when you do the tests I suggested in #7. Using -set sortseed- is just covering up the problem, not solving it.

    The appropriate use of -set sortseed- is when you are doing calculations that are supposed to be non-deterministic and sort-dependent, but you want to be able to replicate the specific results that you got. But when you are getting non-deterministic results from calculations that are supposed to be deterministic, -set sortseed- just sweeps the problem under the rug.



  • Aleksandra Anic
    replied
    Thank you for such an extensive explanation. I first tried set sortseed # at the beginning of the do-file, and it appears to be OK. I checked several regressions three times each, and the results are the same. Anyway, I will do what Clyde suggested in order to be completely sure about what is going wrong.

    Thank you.

    Best,
    Aleksandra



  • Clyde Schechter
    replied
    "Also I realized that when I save the database with all the prepared variables and use that saved database as the input database for the regression, I get the same results. But if I run the whole do-file, where the variables are generated, the results differ."
    So, if I understand you correctly, if you generate your variables and save the database, and then re-use that saved database to do the probit analysis several times, the results are the same each time. But if you re-generate the variables each time, then you get varying results each time. That indeed proves my point in #2 that there is something sort-order-dependent in the way you are calculating these variables. So something is wrong.

    Looking at the examples of the calculations you show, none of those should be the source of the problem. dep and wkm involve no sorting at all, and all of the calculations for those commands are done independently within each observation.

    The commands for deps, wkms, and pen require the data to be sorted, but they use only the -total()- function, which should, in theory, be independent of the sort order of the data. Now, it is true that finite-precision computer addition is not, strictly speaking, associative. The order in which things get totaled up can matter, but the problem arises only with fairly pathological data in which the running total gets so large that subsequent additions of small values to it have no effect: the order of magnitude of the running total is so much greater than what is being added that what is being added gets rounded down to zero. But that requires a variable whose values range over many orders of magnitude, or an extremely large number of observations being added.

    Since your dep and wkm variables are 0/1 dichotomies, and I'm assuming you do not have a data set with quadrillions of observations in it, the totaling up of dep and wkm into deps and wkms should not be sort-order dependent. It is conceivable that the variable pen is problematic, but it would surprise me. What are the largest and smallest absolute values of pen? What is the largest number of observations for a single year-idhh group? I'd be astonished if this is really what's going wrong, but it's simple enough to check these out.
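    As a quick sketch (not from the thread; the 1e16 magnitude is just an illustrative value) of the non-associativity just described, a small addend can be rounded away entirely when it is added to a much larger running total:

    ```stata
    * Finite-precision addition is not associative: when the running total
    * is many orders of magnitude larger than the next addend, the addend
    * is rounded away, so the order of summation changes the result.
    display (1e16 + 1) - 1e16    // the 1 is absorbed into 1e16: displays 0
    display (1e16 - 1e16) + 1    // the large terms cancel first: displays 1
    ```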

    So the problem is probably somewhere else. If you cannot identify the source of the problem by reviewing the code, I would do the following:

    1. Modify your starting data set by including a new variable: -gen long obs_no = _n-. obs_no will now be a unique identifier in your data set, and it will remain so unless you use -expand- or -merge- or -append- along the way to bring in new observations.

    2. In all your -bys ...:- commands, add (obs_no) at the end of the sort key. Include obs_no at the end of the sort key in any explicit -sort- commands. This will provide a reproducible and unique sort order for the data.

    3. Run the do-file to create all the variables. (You can skip doing the actual -probit-.) Save the results in a data set, sorted on obs_no. This is your reference data set.

    4. Remove the (obs_no) references from all your -bys ...:- commands and remove obs_no from the sort keys of your explicit -sort- commands. Do not, however, eliminate the obs_no variable.

    5. Now re-run the do-file. When it finishes, sort the data on obs_no.

    6. Now use the -cf- command to compare these results to the reference data set you saved the first time.

    The point of this is that the -cf- results will tell you which variables are changing. Then you can focus your scrutiny of the code on just those commands that are involved in creating those variables.

    Once you know which variable(s) are actually indeterminate, then you can go back into the code and insert some -summarize- commands after each command that changes them and run the code a few times. By seeing where the -summarize- results first differ from one run to the next you will be able to identify the (first) place where the calculation is indeterminate. Presumably then you'll be able to fix that. Then try it again--perhaps there are more points of indeterminacy, or perhaps there is only that one place where it's happening.
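    The six steps above might be sketched like this (a sketch only: reference.dta is a hypothetical file name, and the -bys- line stands in for whatever is actually in the do-file):

    ```stata
    * Step 1 (in the starting data set): create a stable unique identifier
    gen long obs_no = _n

    * Step 2 (first run): add obs_no to every sort key, e.g.
    bys year idhh (obs_no): egen deps = total(dep)
    * ... rest of the do-file ...

    * Step 3: save the deterministic run as the reference data set
    sort obs_no
    save reference.dta, replace

    * Steps 4-5 (second run): restore the original sort keys, e.g.
    * bys year idhh: egen deps = total(dep)
    * and at the end of the run:
    sort obs_no

    * Step 6: -cf- reports which variables differ between the two runs
    cf _all using reference.dta, verbose
    ```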



  • Sergiy Radyakin
    replied
    Aleksandra, if you are using the sort command (as you indicated in post #1), or any other command that uses sort internally, then you are in fact using random numbers, and Kreshna's advice on setting the sortseed is relevant. Sorting with the stable option is another alternative. This is also consistent with the behavior you described when loading the data from file, indicating that the problem lies in the data preparation (sorting) and not in your estimation. Best, Sergiy Radyakin
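    For example, using the sort keys from post #1 (a sketch; -stable- keeps observations that tie on the key in their current order, so the result is reproducible):

    ```stata
    * Without -stable-, observations tied on (idperson, year) land in an
    * arbitrary order; with it, ties keep their current relative order.
    sort idperson year, stable
    ```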



  • Aleksandra Anic
    replied
    Dear Kreshna,

    Thank you for your answer. I do not have any random number generator.

    Best,
    Aleksandra



  • Aleksandra Anic
    replied
    Dear Clyde,

    Thank you for your help. I have many variables generated using sort, e.g. the number of dependents and the number of working-age members in the household:

    gen dep = dag<=17 | dag>=65 //dependent household member
    gen wkm = dag>17 & dag<65 // working age member
    bys year idhh: egen deps=total(dep) // n. of dependants
    bys year idhh: egen wkms=total(wkm) // n. of working age members

    Or the total pensions in the household:
    bys year idhh: egen penT=total(pen)

    I do not understand what I should do to fix the problem if it is a sorting problem.
    Also, I realized that when I save the database with all the prepared variables and use that saved database as the input database for the regression, I get the same results. But if I run the whole do-file, where the variables are generated, the results differ.

    I do not have any random number generator as Kreshna mentioned.

    Thank you.

    Best,
    Aleksandra



  • Kreshna Gopal (StataCorp)
    replied
    You may be getting different results in different runs because of the randomness in -sort-. -sort- randomly breaks any ties in the key values. To reproduce exactly the same results, try -set sortseed #-. The latter specifies the seed of the random-number generator that breaks ties in -sort-. If the commands you are using involve randomness other than with -sort-, -set seed #- is also needed to reproduce the same results.
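    A minimal sketch of the two settings (the seed values are arbitrary):

    ```stata
    set sortseed 12345   // reproduce how -sort- breaks ties in key values
    set seed 67890       // needed only if other commands draw random numbers
    ```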

    I hope this solves the problem.

    -- Kreshna



  • Clyde Schechter
    replied
    Well, I suspect you have only partially diagnosed your problem. It is true that irreproducibility is often the result of indeterminate sorting. But -probit- should not be subject to that problem: the results really should be the same regardless of the sort order of the data. It is more likely that something you are calculating before you get to the -probit- command depends on the sort order. The solution, then, is to fix that calculation so that it is independent of the sort order, not to stabilize the sort. Stabilizing the sort can be done easily, but it just sweeps a huge problem under the rug.

    So scour your code for any calculations that might be sort order dependent. Any code that references subscripted variables, _n, or _N will be sort order dependent. Beware of any looping over observations: this is often sort order dependent as well. -collapse- with (first), (last), (firstnm), or (lastnm) is, evidently, sort-order dependent. Harder to spot might be programs you call that do something sort-order dependent internally.
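    As a sketch of the kinds of code to look for (hypothetical variable names, not from the thread), each of these gives different answers under different sort orders whenever the sort key has ties:

    ```stata
    * Subscripts: which observation is [1] depends on the current order
    bys year idhh: gen first_pen = pen[1]

    * _n / lagged references: depend entirely on the current order
    gen prev_pen = pen[_n-1]

    * (first)/(last) under an unstable sort: which row is "first" can vary
    collapse (first) pen, by(year idhh)
    ```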

