I have a workflow that requires frequently collapsing data according to different specifications and then running regressions. The dataset is very large, and nearly every operation seems to take 15-30 seconds or more. In principle, is it computationally cheaper to use preserve/restore repeatedly, or to simply clear and re-load the dataset every time?
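For concreteness, a minimal sketch of the two approaches being compared is below. The file name main_data.dta and the variable names (outcome, treatment, and the grouping variables) are placeholders for illustration only, not the actual data.

    * Approach 1: keep the full data in memory, preserve/restore around each collapse
    use main_data, clear
    foreach g in industry region year {
        preserve
        collapse (mean) outcome treatment, by(`g')
        regress outcome treatment
        restore
    }

    * Approach 2: clear and re-load the dataset before each collapse
    foreach g in industry region year {
        use main_data, clear
        collapse (mean) outcome treatment, by(`g')
        regress outcome treatment
    }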
I know that my workflow is probably more cumbersome than it needs to be, and that the real answer is to bite the bullet and learn how to leverage Mata, but it would be helpful to know for future reference.