Dear Statalist community,
I have a dataset on the length of growing seasons (los) at different locations (ids) over multiple years. The locations shown below have two growing seasons per year. The variable meanlos gives the location's long-term average los for the first and second season, respectively. I want to find out whether it is the first or the second season that is typically the longer one.
For example, I could create a new dummy variable "long" that is 1 if the season is the long one and 0 otherwise. In the data below, the variable would be 0 if firstseason == 1 and 1 if firstseason == 2, because meanlos is higher in the second season. Do you have any advice on how to do that?
Thanks a lot!
Code:
clear
input double id int year byte(firstseason los) float meanlos
3 1982 1  4  4
3 1982 2 13 10.242424
3 1983 1  4  4
3 1983 2 13 10.242424
3 1984 1  5  4
3 1984 2  3 10.242424
3 1985 1  4  4
3 1985 2 12 10.242424
3 1986 1  2  4
3 1986 2  3 10.242424
3 1987 1  7  4
3 1987 2 10 10.242424
3 1988 1  6  4
3 1988 2 10 10.242424
3 1989 1  5  4
3 1989 2  7 10.242424
3 1990 1  3  4
3 1990 2 13 10.242424
end
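One possible approach, assuming meanlos is constant within each id-season combination as in the example above: find the maximum meanlos within each id and flag the season whose meanlos matches it. This is only a sketch of what I have in mind; the variable name maxmean is made up for illustration.

Code:
* within each id, find the larger of the two seasonal means
bysort id: egen maxmean = max(meanlos)
* flag observations belonging to the season with the larger mean
gen byte long = meanlos == maxmean
drop maxmean

With the example data this would set long = 1 for all firstseason == 2 rows, since 10.242424 > 4, though I am not sure this handles ties or missing meanlos values sensibly.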