This is a very broad conceptual question, so thank you in advance to those willing to indulge it.
I work primarily with time-series economic data. In many different contexts, I use Stata to estimate trend components. Sometimes these are univariate (using the "ucm" command), and sometimes these are common underlying components (using e.g. "sspace" or "dfactor").
I understand the trade-offs of predicting trends using "smethod(smooth)". Since this is a two-sided method, it uses information that would not have been available in real time.
My question is: are there other trade-offs to bear in mind when choosing between "onestep" and "filter"? Put another way, why wouldn't "filter" (which uses both past and contemporaneous information) always be the preferred choice?
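For concreteness, here is a minimal sketch of the kind of comparison I have in mind (the variable names `gdp` and the prediction names `trend_*` are just placeholders):

```stata
* Fit a univariate unobserved-components model (random walk trend)
ucm gdp, model(rwalk)

* One-step-ahead prediction: uses information through t-1 only
predict trend_onestep, smethod(onestep)

* Filtered prediction: uses information through t (past + contemporaneous)
predict trend_filter, smethod(filter)

* Smoothed prediction: two-sided, uses the full sample
predict trend_smooth, smethod(smooth)
```

Given that `filter` conditions on strictly more information than `onestep` without looking into the future, I would have expected it to dominate for real-time trend estimation.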