  • Maximum likelihood estimator

    Dear Statalist users,

    I would like to know the assumptions behind using maximum likelihood estimation (MLE) for a time-series regression. Should I, for example, check for stationarity, multicollinearity, etc.?
    Does MLE control for serial correlation and heteroscedasticity?


    Thanks a lot for your response.

    Emna

  • #2
    Any answer to my query, please!



    • #3
      You didn't get a quick answer partially because there is no answer to your question without a lot more information.

      There are maximum likelihood estimators that control for serial correlation and heteroskedasticity, and there are some that don't. Collinearity is a general issue in all estimation, although some would say its importance has been overblown. Unless your model explicitly takes stationarity into account somehow, a lack of stationarity can be a problem, although different fields place different emphasis on the stationarity assumption.

      MLE refers to maximizing a likelihood function. The catch is that it doesn't apply to just one likelihood function; it works for many, if not all, likelihood functions. Thus, any model you can fit in SEM can be estimated by maximum likelihood, but that covers almost anything.
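To illustrate the point above, here is a minimal, self-contained sketch (not from the thread) of what "maximizing a likelihood function" means in practice: the same numerical machinery works regardless of which likelihood you plug in. The example fits a normal mean and standard deviation to simulated data.

```python
# Minimal MLE sketch: maximize a (log-)likelihood numerically.
# The same approach applies to many different likelihood functions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=500)  # simulated data

def neg_loglik(params):
    # Negative log-likelihood of an i.i.d. normal sample.
    mu, log_sigma = params
    sigma = np.exp(log_sigma)  # log-parameterization keeps sigma > 0
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + (y - mu)**2 / sigma**2)

res = minimize(neg_loglik, x0=[0.0, 0.0])
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)  # should be close to 2.0 and 1.5
```

Swapping in a different `neg_loglik` (e.g., a regression likelihood) changes the model but not the estimation machinery.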



      • #4
        Dear Phil Bromiley,

        Thank you for your attention to my post. In fact, I want to estimate linear regressions with a bootstrapped MLE that handles quasi-stationary distributions. I am building on this paper: https://www.researchgate.net/publica..._distributions

        I have a total of 192 regressions to estimate, and I wonder whether it is fine to go ahead with that method (my variables are non-stationary in only a few of the 192 cases). I already have the code, for which I got help from Jeff Pitblado (StataCorp) on Statalist. I just need to justify the use of the estimator with regard to serial correlation, heteroscedasticity, etc.


        Best,
        Emna



        • #5
          To clarify further: this work is part of a paper under revision. The data come from laboratory experiments (experimental economics). The reviewer suggested estimating individual-level regressions as a robustness check on the panel model results (in my case, 192 time-series regressions), and also recommended the Newey-West estimator to account for serial correlation and heteroscedasticity. The task is done, but it was exhausting because I applied the stationarity, serial correlation, multicollinearity, and omitted-variable-bias tests case by case. Some variables were first-differenced to make them stationary; I corrected multicollinearity by dropping variables; and when the Ramsey test indicated omitted variable bias, I added new variables based on lagged values of the initial regressors. I repeated these steps until the test results were satisfactory. Serial correlation was controlled at lag m = 3 (using the Stock and Watson rule of thumb m = 0.75 T^(1/3) with T = 45).

          So, the purpose of my post is to ask whether there is another way to fulfill the reviewer's request that is faster, more efficient, and still convincing, without being obliged to transform variables, add new regressors, or delete variables case by case. I wonder whether bootstrapped MLE can solve my issue.
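For intuition on what a bootstrap buys here, the following is a hypothetical sketch (simulated data, illustrative names) of a pairs bootstrap for a regression slope. Resampling observation pairs gives standard errors that are robust to heteroskedasticity; note that under serial correlation a block bootstrap (resampling contiguous blocks) would be needed instead, so this is a simplification, not the method from the cited paper.

```python
# Hypothetical pairs-bootstrap sketch for an OLS slope, on simulated data.
# Robust to heteroskedasticity; a block bootstrap would be needed to
# handle serial correlation in time-series data.
import numpy as np

rng = np.random.default_rng(2)
T = 45
x = rng.normal(size=T)
y = 1.0 + 0.5 * x + rng.normal(size=T)  # true slope 0.5

def ols_slope(y, x):
    # Slope from a least-squares fit of y on [1, x].
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

boot = []
for _ in range(999):
    idx = rng.integers(0, T, size=T)  # resample (y, x) pairs with replacement
    boot.append(ols_slope(y[idx], x[idx]))

slope_hat = ols_slope(y, x)
se_boot = np.std(boot, ddof=1)
print(slope_hat, se_boot)
```

The bootstrap standard error can then be compared with the Newey-West one as a robustness check.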

          Best,

          Emna



          Last edited by Emna Trabelsi; 29 Aug 2020, 08:46.

