
  • Panel data: Issues with stationarity

    Hi,

    I am currently writing my master's thesis, and I am having some issues with my analysis. I am analysing the stock market performance of firms as a function of different kinds of ownership structures. One of my analyses is to determine how the founder's ownership stake affects stock market performance.

    I have panel data on 500 companies over 18 years, with three years of monthly observations per firm of stock market return and ownership (share of total ownership). Typically, the founder's ownership stake starts at 50-100%, but after five years it is roughly 0-20%. Hence, my variable trends downwards, but in a predictable way: founders rarely increase their ownership, they almost always reduce it. The variable therefore looks somewhat like a downward staircase, so to my understanding "Ownership" is most likely non-stationary.

    I have thought of using fixed effects, but I am having trouble with the following:
    - First of all, using monthly data over several years, my panel is only weakly balanced. Can I use the Levin-Lin-Chu test if I change the time variable to run from 1-36 for each firm (making the panel strongly balanced), instead of using the monthly date (i.e. 01.02.2003)?
    - In that case, the Levin-Lin-Chu test indicates that "Ownership" is non-stationary, but that the first difference ("D1.Ownership") is stationary. However, when I take the first difference of "Ownership", almost all values of this variable become zero (the founders do not sell a small amount of shares many times, but rather big chunks a few times). With most values at zero, it seems hard to detect a relationship between stock market performance and "Ownership". Do you have any recommendation on how to solve this?
    - Given that "D1.Ownership" is stationary, can I use it in my xtreg, or does it have to be cointegrated with stock market return?
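
    [Editor's note: a minimal Stata sketch of the re-indexing and unit-root test described above; firm, mdate, ownership are hypothetical variable names standing in for the poster's own.]

    Code:
    * build a sequential time index 1-36 within each firm
    bysort firm (mdate): gen t = _n
    xtset firm t
    * Levin-Lin-Chu test requires a strongly balanced panel
    xtunitroot llc ownership
    * test the first difference for comparison
    gen d_ownership = D.ownership
    xtunitroot llc d_ownership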

    Thank you in advance,
    Jessica

  • #2
    You have a large \(N\) and small \(T\) data set, so you can ignore stationarity and proceed with the fixed effects regression at levels.
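
    [Editor's note: a sketch of the suggested fixed-effects regression at levels, with hypothetical variable names (ret, ownership, firm, t) and standard errors clustered at the firm level.]

    Code:
    xtset firm t
    * fixed effects at levels, cluster-robust SEs by firm
    xtreg ret ownership, fe vce(cluster firm)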



    • #3
      Originally posted by Andrew Musau View Post
      You have a large \(N\) and small \(T\) data set, so you can ignore stationarity and proceed with the fixed effects regression at levels.
      Thank you, Andrew! I really appreciate your feedback. What is the "rule" for how large/small \(N\) and \(T\) need to be? Just to be sure: \(T\) = 36 (3 years of monthly observations).



      • #4
        You simply check whether \(N\) > \(T\) or \(T\) > \(N\). In your case, you have \(N\) being far larger than \(T\). Stationarity is usually a problem if the opposite holds.



        • #5
          Originally posted by Andrew Musau View Post
          You simply check whether \(N\) > \(T\) or \(T\) > \(N\). In your case, you have \(N\) being far larger than \(T\). Stationarity is usually a problem if the opposite holds.
          I understand. Thanks!



          • #6
            Originally posted by Andrew Musau View Post
            You simply check whether \(N\) > \(T\) or \(T\) > \(N\). In your case, you have \(N\) being far larger than \(T\). Stationarity is usually a problem if the opposite holds.
            Hi, I just found this thread. I was wondering whether this still holds when analysing macroeconomic trends such as GDP growth. I have a panel dataset of 15 countries from 2007-2020. Should I not test for stationarity?



            • #7
              No need to, you still have \(N>T\). However, your sample size is small with \(N\)=15, so cluster-robust standard errors are unreliable when estimating an FE model. You may want to consider

              Code:
              help wildbootstrap
              which describes the command as follows: "wildbootstrap performs wild cluster bootstrap (WCB) inference for linear hypotheses about parameters in linear regression models. These hypotheses can be simple or composite. When the assumptions required for the consistency of the cluster-robust variance estimator do not hold, the WCB is a good alternative."
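
              [Editor's note: an illustrative call, with hypothetical variable names (gdp_growth, x1, x2, country); consult help wildbootstrap for the full syntax and options.]

              Code:
              * wild cluster bootstrap inference with few (15) clusters
              wildbootstrap regress gdp_growth x1 x2, cluster(country) rseed(12345)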



              • #8
                Originally posted by Andrew Musau View Post
                No need to, you still have \(N>T\). However, your sample size is small with \(N\)=15 and you cannot cluster your standard errors when estimating a FE model. You may want to consider

                Code:
                help wildbootstrap
                Thank you very much Andrew.

