I used 5-point Likert-scale questions as indicators for the latent variables Socialization, Externalization, Combination, and Internalization (SECI).

These SECI latent variables are then used as indicators for another latent variable, Ability.

I've been trying to estimate this seemingly simple model with no success. It seems I have an identification problem, but the number of manifest variables is quite large relative to the number of free parameters.

Code:

sem (Socialization -> Social_3 Social_4 Social_6) ///
    (Extnalization -> External_1 External_2 External_3) ///
    (Combination -> Combination_1 Combination_2 Combination_3 Combination_6) ///
    (Internalization -> Internal_1 Internal_2 Internal_4 Internal_6) ///
    (Ability -> Socialization Extnalization Combination Internalization), ///
    latent(Socialization Extnalization Combination Internalization Ability) nocapslatent

Warning: The LR test of model vs. saturated is not reported because the
fitted model is not full rank. There appears to be 7 more fitted
parameters than the data can support.

convergence not achieved

Everything is fine, but I would like to know how to get the stored result for the Gini coefficient. I could just copy and paste it, but I need to make a table with 96 Gini coefficients, and I really just want to loop over them and add them to a table.
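A sketch of the kind of loop I have in mind — assuming the Gini comes from a command that leaves r(gini) behind (ineqdeco from SSC does), and with groupvar and y as placeholder names:

```stata
* Hypothetical sketch: groupvar and y are placeholders; ineqdeco (SSC)
* stores the Gini coefficient in r(gini) after each call.
tempname memhold
postfile `memhold' group gini using ginis, replace
levelsof groupvar, local(groups)
foreach g of local groups {
    quietly ineqdeco y if groupvar == `g'
    post `memhold' (`g') (r(gini))
}
postclose `memhold'
use ginis, clear
list
```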

Hope someone can help me with this one. Thanks!

However, the generated document only includes the existing values (and their frequencies, etc.) of the dataset. See the snapshot below:


Apparently, there are other types of drugs in the defined value label. As a data dictionary (codebook), shouldn't it include all pre-defined labels from all value labels?

Or perhaps I'm misinterpreting the purpose of 'wordcb', while my boss is asking me to generate a codebook that includes all pre-defined value labels, for use as a reference manual (frequency or percentage is not compulsory).

Thank you.

I'm building structural equation models with the SEM Builder, and I'm getting an error message when I try to create a path to indicate moderation from one latent variable to a path between two latent variables: "May only create path to path with observed variables". How do I work around this?

Thank you,

Teresa

I ran the following code in Stata after inputting the design matrix into the Stata Data Editor:

steppedwedge, power incomplete(1) alpha(0.05) rho(0.0267183) m(75) mu1(7.6) sd1(3.9) mu2(10.4) sd2(5.0)

- m = 75 because the mean cluster size is 75.
- rho = 0.0267183; the ICC of 0.0267183 was estimated from a routinely collected dataset of individual-level data.
- mu1 and sd1 = 7.6 (3.9).
- mu2 and sd2 = 10.4 (5.0); these were from a previous RCT.
- I wanted three steps.
- Power = 0.80.
- Alpha = 0.05.

Some of the variations give me large sample sizes but the power is far more than I needed. I was aiming for 80% power, but most of these variations give me over 90% power.

Code:

clear
input str31 var1 float(var2 var3)
1 1 1
0 1 1
0 0 1
end

Code:

clear
input byte var1 float(var2 var3)
1 1 1
1 1 1
1 1 1
0 1 1
0 1 1
0 1 1
0 0 1
0 0 1
0 0 1
end

Code:

clear
input byte var1 float(var2 var3) byte var4
. 1 1 1
. 1 1 1
. 1 1 1
0 . 1 1
0 . 1 1
0 . 1 1
0 0 . 1
0 0 . 1
0 0 . 1
end

Code:

clear
input byte var1 float(var2 var3) byte var4
. 1 1 1
0 . 1 1
0 0 . 1
end

However, the nature of my dataset is different. It comprises observations of homes that were traded at least once between 2006 and 2020. Descriptive analysis revealed that about half of these homes were traded only once, approximately 30% were traded twice, and the remaining 20% were traded between 3 to 6 times within this 15-year timeframe. For homes involved in more than one transaction, the first transactions could occur at any point between 2006 and 2019, with subsequent transaction(s) happening between 2007 and 2020. For instance, Home X was traded in 2006 and 2012, Home Y in 2009, 2015, and 2018, and Home Z was traded once in 2010.

The dependent variable is the transaction price in log form. My key independent variable is a binary variable indicating whether a purchase is made using cash or a loan. I also have some time-variant variables at the neighborhood level and a few time-invariant variables representing housing size, age, and structural type.

Could you suggest a specific econometric model, along with a corresponding Stata function or R package, that would be suitable for analyzing this unique dataset? Do I also have to deal with spatial autocorrelation in my modeling analysis as some homes are geographically close to each other?

I am currently working with a dataset with more than 700 variables and more than 400,000 observations.

I want to drop observations where more than 100 of the variables are missing.

I tried some things with egen, but I can't get the right command.
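This is the sort of thing I was attempting — a sketch only, using egen's rowmiss() function, and I'm not sure the condition is right:

```stata
* Count missing values across all variables for each observation,
* then drop observations with more than 100 missings.
egen nmiss = rowmiss(*)
drop if nmiss > 100
drop nmiss
```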

Can someone help me? I would say that it is not that hard.

Thanks in advance!

I am a graduate student using interval regressions for the first time. The data I am using is from the Behavior Risk Factor Surveillance System (collected by the CDC). My dependent variable is income, which is collected from respondents in ordinal categories (i.e., it is censored from collection). In following the Stata help file, I have created two dependent variables for my interval regressions, Depvar(1) equal to the lower bound for each level of the original income variable and Depvar(2) equal to the upper bound for each level of the original income variable. Here is an example:

Original level 3 of the income variable "_incomg1" is collected as "$25,000 to < $35,000." For level 3, I have set Depvar(1) equal to 25,000 and Depvar(2) equal to 34,999.

I have two questions for the group that I would greatly appreciate help with:

1. Coming directly from the survey language, the highest income category is top coded (i.e., "$200,000 or more"). I am curious how to decide on the

1: Less than $15,000

2: $15,000 to < $25,000

3: $25,000 to < $35,000

4: $35,000 to < $50,000

5: $50,000 to < $100,000

6: $100,000 to < $200,000

7: $200,000 or more

9: Don't know/Not sure/Missing

2. Given the benefits of using log-transformed income as opposed to income directly as a dependent variable, I would prefer to use ln(income) for my project. Given the ordinal categories, and the structure of dependent variables for interval regressions, I am wondering how to do this properly. Is it as simple as generating a new Depvar(1) and (2) equal to the natural log of the original Depvar(1) and (2), as I would if it were continuous? E.g., gen logDepvar1 = ln(Depvar1)

---

Using the system "auto" dataset as an example, I have recoded the price variable into ordinal categories, with the top category as "$12,000 and more." I have roughly sorted this variable into equal categories, as found below:

sysuse auto

recode price (min/3999 = 1) (4000/4399 = 2) (4400/4899 = 3) (4900/5799 = 4) ///
    (5800/8999 = 5) (9000/11999 = 6) (12000/max = 7), gen(pricecats)

recode price (min/3999 = 0) (4000/4399 = 4000) (4400/4899 = 4400) (4900/5799 = 4900) ///
    (5800/8999 = 5800) (9000/11999 = 9000) (12000/max = 12000), gen(lowprice)

recode price (min/3999 = 3999) (4000/4399 = 4399) (4400/4899 = 4899) ///
    (4900/5799 = 5799) (5800/8999 = 8999) (9000/11999 = 11999)

My questions, then are

1. If I wanted to use this ordinal price variable for interval regressions, and I didn't have the original price values, how could I top-code level 7 for Depvar(2)? (This is the missing 12000/max rule in the last recode above.)

2. If I wanted to set lowprice (i.e., Depvar1) and highprice (i.e., Depvar2) to the log(price), how would I go about doing this?
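For question 2, this is the kind of thing I have in mind — a sketch only, where lowprice/highprice stand for the Depvar(1)/Depvar(2) variables from the recodes above, and I'm assuming that intreg treats a missing lower bound as left-censoring, so ln(0) turning into missing may even be what is wanted for the bottom category:

```stata
* Sketch: log both interval bounds before running intreg.
* ln(0) evaluates to missing, which intreg reads as an open
* (left-censored) lower bound for the bottom category.
gen loglow  = ln(lowprice)
gen loghigh = ln(highprice)
* intreg loglow loghigh <covariates>
```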

---

I hope I have provided all the information needed to help me with these questions. I appreciate any and all help you all can provide to me.

Thanks,

Hannah


I am using cii to calculate a set of binomial confidence intervals

A: cii prop 6162 23

B: cii prop 100 57

I wish to then use r(mean), r(ub), and r(lb) in subsequent calculations, but the calculations combine post-estimation values from both proportions.

Is it possible to store the post-estimation values from A, so I can combine them with the post-estimation values from B in a single calculation? E.g.

disp r(ub)[A]/r(lb)[B], or similar
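In other words, something like this (a sketch; cii does leave r(lb) and r(ub) behind, which can be copied into scalars before the next call overwrites them):

```stata
* Save A's returned values as scalars before running B,
* since each cii call overwrites r().
cii prop 6162 23
scalar lbA = r(lb)
scalar ubA = r(ub)

cii prop 100 57
scalar lbB = r(lb)
scalar ubB = r(ub)

display ubA/lbB
```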

Thanks!


I have a problem with my panel data.

I have an important variable, Env, that is supposed to be significant (low p-value), but I noticed that there is quite a big difference between the random- and fixed-effects estimates. Differences in the yearly dummy variables are (I believe) not a problem.

Env is the updated version of the data and Env2 is the "previous" version. Both are on a scale of 0 to 100.

Env2 is significant under both random and fixed effects. I find it quite difficult to imagine that the slight differences between the two variables justify Env not being significant in the fixed-effects case.

Here's the "problematic" code:

Code:

xtreg ROA Soc Env Y23 Y21 Y20 Y19 Y18 Y17 Y16 Y15 Y14 Y13, fe
estimates store fixed
xtreg ROA Soc Env Y23 Y21 Y20 Y19 Y18 Y17 Y16 Y15 Y14 Y13, re
estimates store random
hausman fixed random, sigmamore

Code:

. xtreg ROA Soc Env Y23 Y21 Y20 Y19 Y18 Y17 Y16 Y15 Y14 Y13, fe

Fixed-effects (within) regression               Number of obs     =        534
Group variable: company_                        Number of groups  =         89

R-squared:                                      Obs per group:
     Within  = 0.0903                                         min =          1
     Between = 0.0116                                         avg =        6.0
     Overall = 0.0028                                         max =         10

                                                F(12,433)         =       3.58
corr(u_i, Xb) = -0.3980                         Prob > F          =     0.0000

------------------------------------------------------------------------------
         ROA | Coefficient  Std. err.      t    P>|t|     [95% conf. interval]
-------------+----------------------------------------------------------------
         Soc |  -.1950134   .0420047    -4.64   0.000     -.277572   -.1124549
         Env |   .0555025    .037416     1.48   0.139     -.018037     .129042
         Y23 |   2.487787   1.645604     1.51   0.131    -.7465792    5.722152
         Y21 |    2.30636   .9572937     2.41   0.016     .4248395     4.18788
         Y20 |   -.589413   1.001768    -0.59   0.557    -2.558346     1.37952
         Y19 |   .3879365   1.141829     0.34   0.734    -1.856279    2.632152
         Y18 |   2.255327   1.277438     1.77   0.078     -.255424    4.766078
         Y17 |   1.223604   1.395741     0.88   0.381    -1.519666    3.966875
         Y16 |   .6098173   1.505278     0.41   0.686    -2.348742    3.568377
         Y15 |  -.7039242    1.54012    -0.46   0.648    -3.730966    2.323117
         Y14 |  -.4803801    1.61304    -0.30   0.766    -3.650743    2.689982
         Y13 |  -.2086145   1.877213    -0.11   0.912    -3.898198    3.480969
       _cons |   13.53602    2.58561     5.24   0.000     8.454115    18.61793
-------------+----------------------------------------------------------------
     sigma_u |  12.144308
     sigma_e |  6.2508888
         rho |  .79055495   (fraction of variance due to u_i)
------------------------------------------------------------------------------
F test that all u_i=0: F(88, 433) = 7.54                  Prob > F = 0.0000

. estimates store fixed

. xtreg ROA Soc Env Y23 Y21 Y20 Y19 Y18 Y17 Y16 Y15 Y14 Y13, re

Random-effects GLS regression                   Number of obs     =        534
Group variable: company_                        Number of groups  =         89

R-squared:                                      Obs per group:
     Within  = 0.0786                                         min =          1
     Between = 0.0125                                         avg =        6.0
     Overall = 0.0284                                         max =         10

                                                Wald chi2(12)     =      37.66
corr(u_i, X) = 0 (assumed)                      Prob > chi2       =     0.0002

------------------------------------------------------------------------------
         ROA | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
-------------+----------------------------------------------------------------
         Soc |  -.1443823   .0368393    -3.92   0.000     -.216586   -.0721785
         Env |   .0937023   .0334626     2.80   0.005     .0281168    .1592878
         Y23 |   2.700422    1.64934     1.64   0.102    -.5322246    5.933068
         Y21 |   2.652062    .960349     2.76   0.006      .769813    4.534312
         Y20 |   .2326752   .9933623     0.23   0.815    -1.714279     2.17963
         Y19 |   1.658275   1.112127     1.49   0.136    -.5214548    3.838005
         Y18 |   3.929181    1.22406     3.21   0.001     1.530067    6.328295
         Y17 |   2.894581   1.349148     2.15   0.032     .2502986    5.538864
         Y16 |    2.43456   1.446509     1.68   0.092    -.4005449    5.269666
         Y15 |   1.233097   1.474467     0.84   0.403    -1.656805    4.122999
         Y14 |   1.816727   1.519329     1.20   0.232    -1.161102    4.794557
         Y13 |   2.215353   1.788924     1.24   0.216    -1.290873    5.721579
       _cons |   6.133317   2.175926     2.82   0.005     1.868579    10.39805
-------------+----------------------------------------------------------------
     sigma_u |  10.100888
     sigma_e |  6.2508888
         rho |  .72308163   (fraction of variance due to u_i)
------------------------------------------------------------------------------

. estimates store random

. hausman fixed random, sigmamore

Note: the rank of the differenced variance matrix (11) does not equal the
      number of coefficients being tested (12); be sure this is what you
      expect, or there may be problems computing the test. Examine the output
      of your estimators for anything unexpected and possibly consider
      scaling your variables so that the coefficients are on a similar scale.

                 ---- Coefficients ----
             |      (b)          (B)            (b-B)     sqrt(diag(V_b-V_B))
             |     fixed        random       Difference       Std. err.
-------------+----------------------------------------------------------------
         Soc |   -.1950134    -.1443823       -.0506312        .0208556
         Env |    .0555025     .0937023       -.0381998        .0173834
         Y23 |    2.487787     2.700422       -.2126351        .1737672
         Y21 |     2.30636     2.652062       -.3457026        .0923275
         Y20 |    -.589413     .2326752       -.8220883        .1803371
         Y19 |    .3879365     1.658275       -1.270339        .2956484
         Y18 |    2.255327     3.929181       -1.673854        .3989196
         Y17 |    1.223604     2.894581       -1.670977         .39808
         Y16 |    .6098173      2.43456       -1.824743        .4572045
         Y15 |   -.7039242     1.233097       -1.937021        .4849187
         Y14 |   -.4803801     1.816727       -2.297108        .5782523
         Y13 |   -.2086145     2.215353       -2.423968        .6156221
------------------------------------------------------------------------------
                          b = Consistent under H0 and Ha; obtained from xtreg.
           B = Inconsistent under Ha, efficient under H0; obtained from xtreg.

Test of H0: Difference in coefficients not systematic

    chi2(11) = (b-B)'[(V_b-V_B)^(-1)](b-B)
             = 26.73
Prob > chi2  = 0.0050
(V_b-V_B is not positive definite)

Code:

. xtreg ROA Soc Env2 Y23 Y21 Y20 Y19 Y18 Y17 Y16 Y15 Y14 Y13, fe

Fixed-effects (within) regression               Number of obs     =        509
Group variable: company_                        Number of groups  =         86

R-squared:                                      Obs per group:
     Within  = 0.1215                                         min =          1
     Between = 0.0002                                         avg =        5.9
     Overall = 0.0126                                         max =         10

                                                F(12,411)         =       4.74
corr(u_i, Xb) = -0.1587                         Prob > F          =     0.0000

------------------------------------------------------------------------------
         ROA | Coefficient  Std. err.      t    P>|t|     [95% conf. interval]
-------------+----------------------------------------------------------------
         Soc |   .0596746   .0257027     2.32   0.021     .0091494    .1101997
        Env2 |  -.0759133   .0201294    -3.77   0.000    -.1154827   -.0363439
         Y23 |   1.714019   .9464357     1.81   0.071    -.1464398    3.574477
         Y21 |    .724673   .5702726     1.27   0.205    -.3963419    1.845688
         Y20 |  -1.065076   .5963272    -1.79   0.075    -2.237308    .1071562
         Y19 |   .9091219   .6810253     1.33   0.183    -.4296055    2.247849
         Y18 |    2.49774    .747119     3.34   0.001     1.029089    3.966392
         Y17 |   1.110816   .8117859     1.37   0.172    -.4849544    2.706586
         Y16 |   1.461078   .8683571     1.68   0.093     -.245897    3.168054
         Y15 |   .1247239   .8978083     0.14   0.890    -1.640145    1.889593
         Y14 |     1.0386   .9380706     1.11   0.269    -.8054144    2.882615
         Y13 |   1.747345   1.109523     1.57   0.116    -.4337018    3.928392
       _cons |   6.742863   1.587189     4.25   0.000     3.622843    9.862884
-------------+----------------------------------------------------------------
     sigma_u |  8.5014054
     sigma_e |  3.5629637
         rho |  .85059528   (fraction of variance due to u_i)
------------------------------------------------------------------------------
F test that all u_i=0: F(85, 411) = 14.08                 Prob > F = 0.0000

. estimates store fixed

. xtreg ROA Soc Env2 Y23 Y21 Y20 Y19 Y18 Y17 Y16 Y15 Y14 Y13, re

Random-effects GLS regression                   Number of obs     =        509
Group variable: company_                        Number of groups  =         86

R-squared:                                      Obs per group:
     Within  = 0.1193                                         min =          1
     Between = 0.0068                                         avg =        5.9
     Overall = 0.0230                                         max =         10

                                                Wald chi2(12)     =      55.41
corr(u_i, X) = 0 (assumed)                      Prob > chi2       =     0.0000

------------------------------------------------------------------------------
         ROA | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
-------------+----------------------------------------------------------------
         Soc |   .0626816   .0231504     2.71   0.007     .0173076    .1080556
        Env2 |  -.0600636   .0191302    -3.14   0.002     -.097558   -.0225692
         Y23 |   1.771838   .9485326     1.87   0.062    -.0872514    3.630928
         Y21 |   .9011142   .5714768     1.58   0.115    -.2189598    2.021188
         Y20 |  -.8542544   .5940272    -1.44   0.150    -2.018526    .3100174
         Y19 |   1.231007   .6671197     1.85   0.065    -.0765231    2.538538
         Y18 |   2.931363   .7259445     4.04   0.000     1.508538    4.354188
         Y17 |   1.562656   .7944344     1.97   0.049     .0055932    3.119719
         Y16 |   1.880968   .8433974     2.23   0.026     .2279397    3.533997
         Y15 |   .6088329   .8679521     0.70   0.483    -1.092322    2.309988
         Y14 |    1.57318   .8958133     1.76   0.079    -.1825821    3.328941
         Y13 |   2.304711   1.069608     2.15   0.031     .2083169    4.401105
       _cons |   4.824359   1.466826     3.29   0.001     1.949433    7.699286
-------------+----------------------------------------------------------------
     sigma_u |  7.6336945
     sigma_e |  3.5629637
         rho |  .82112064   (fraction of variance due to u_i)
------------------------------------------------------------------------------

. estimates store random

. hausman fixed random, sigmamore

Note: the rank of the differenced variance matrix (11) does not equal the
      number of coefficients being tested (12); be sure this is what you
      expect, or there may be problems computing the test. Examine the output
      of your estimators for anything unexpected and possibly consider
      scaling your variables so that the coefficients are on a similar scale.

                 ---- Coefficients ----
             |      (b)          (B)            (b-B)     sqrt(diag(V_b-V_B))
             |     fixed        random       Difference       Std. err.
-------------+----------------------------------------------------------------
         Soc |    .0596746     .0626816        -.003007         .011512
        Env2 |   -.0759133    -.0600636       -.0158497         .006636
         Y23 |    1.714019     1.771838       -.0578196        .0815824
         Y21 |     .724673     .9011142       -.1764412         .049841
         Y20 |   -1.065076    -.8542544       -.2108213        .0834126
         Y19 |    .9091219     1.231007       -.3218855        .1557256
         Y18 |     2.49774     2.931363       -.4336225         .194461
         Y17 |    1.110816     1.562656       -.4518402        .1889185
         Y16 |    1.461078     1.880968       -.4198898        .2273164
         Y15 |    .1247239     .6088329        -.484109        .2495674
         Y14 |      1.0386      1.57318       -.5345792        .2965421
         Y13 |    1.747345     2.304711       -.5573657        .3187259
------------------------------------------------------------------------------
                          b = Consistent under H0 and Ha; obtained from xtreg.
           B = Inconsistent under Ha, efficient under H0; obtained from xtreg.

Test of H0: Difference in coefficients not systematic

    chi2(11) = (b-B)'[(V_b-V_B)^(-1)](b-B)
             = 22.09
Prob > chi2  = 0.0237
(V_b-V_B is not positive definite)

I need to create a transition matrix, but I've never done this before and I've been spending a lot of time on it.

I've had a lot of trouble with -xttrans-, because I always get a "no observations" message, which seems strange to me because when I inspect the data manually, I observe changes. Here are my steps:

- xtset idcontrato date_contract_start
- xttrans tariff_ekon_id_encod
- "no observations"

So, to give a bit of context. I have a database (which I'll make available below) which includes data on households and their contractual information about their energy tariffs, the various dates the contracts came into effect, etc. In particular, I'd like to look at how the contracts have changed over time.

In particular, I would like to observe the probability of household i changing from tariff a to tariff b when signing a new contract. I would also like to observe the probability of changing from a type of contract y to a type of contract z. It should be

Please find below a dataex with the first 10 observations, and, if necessary, a dataex in .dta format as an attachment. The attached example contains 1,000 observations.

Code:

* Example generated by -dataex-. For more info, type help dataex
clear
input long(id idcontrato sp_zipcode) double(date_contract_start date_contract_end) long(product_classification_encod tariff_ekon_id_encod)
1001    1001  9200 18887 21700 1 1
1001  451697  9200 21701 22431 1 2
1001 1236132  9200 22432 22645 1 4
1001 1730454  9200 22646 22676 1 4
1001 2082075  9200 22677 22735 1 4
1001 2172904  9200 22736 23010 1 4
1001 2872183  9200 23011 23069 1 4
1001 3107888  9200 23070     . 4 4
1005    1005 48600 18800 21639 1 1
1005  420392 48600 21640 21651 1 1
end
format %td date_contract_start
format %td date_contract_end
label values product_classification_encod product_classification_encod
label def product_classification_encod 1 "Clasico", modify
label def product_classification_encod 4 "Tarifa Justa", modify
label values tariff_ekon_id_encod tariff_ekon_id_encod
label def tariff_ekon_id_encod 1 "20A", modify
label def tariff_ekon_id_encod 2 "20DHA", modify
label def tariff_ekon_id_encod 4 "20TD", modify

Here a small description about variables:

Variable                     | Description
-----------------------------|------------------------------------------------
id                           | Household ID
idcontrato                   | Contract ID (those should be unique)
sp_zipcode                   | Zip code of a given household
date_contract_start          | Starting date of a given contract
date_contract_end            | Ending date of a given contract
product_classification_encod | Represents the contract type. It contains 4 types.
tariff_ekon_id_encod         | Represents the tariff type. In my dataex, it should have 6 types.

Thank you so much in advance!

Best,

Michael


1- How to create a quarterly date: although I have created a quarterly date, as you can see in the sample of my dataset below, I am not sure whether it's correct or not.

So, I need the exact code for this task.

Code:

clear
input long code str10 endingdateofstatistics double sales float(sl wanted) double interestexpenses
1 "2002-03-31" 595998313 2118441344 5143 494251206
1 "2002-06-30" 1315435885 595998336 5173 1057705003
1 "2002-09-30" 2120786642 1315435904 5204 1763105625
1 "2002-12-31" 3077098847 2120786688 5235 2346094441
1 "2003-03-31" 649968516 3077098752 5265 671811790
1 "2003-06-30" 1428220006 649968512 5295 1342495241
1 "2003-09-30" 2734822115 1428220032 5326 1320285705
1 "2003-12-31" 3128836264 2734822144 5356 2730921859
1 "2004-03-31" 1043003427 3128836352 5387 836718769
1 "2004-06-30" 2181888038 1043003456 5417 1829149395
1 "2004-09-30" 3243275024 2181888000 5448 2731005816
1 "2004-12-31" 4480810798 3243275008 5478 3683960935
1 "2005-03-31" 1087591314 4480811008 5508 848176037
1 "2005-06-30" 2258465172 1087591296 5539 1856542363
1 "2005-09-30" 3431091962 2258465280 5569 2776622010
1 "2005-12-31" 4557763312 3431091968 5600 3746666810
1 "2006-03-31" 1262157098 4557763072 5630 1121451048
1 "2006-06-30" 3020095556 1262157056 5660 2376141819
1 "2006-09-30" 4878583091 3020095488 5691 3732502499
1 "2006-12-31" 6788985745 4878583296 5722 5068772930
1 "2007-03-31" 2244966243 6788985856 5752 1438988613
1 "2007-06-30" 4711940000 2244966144 5782 3187131000
1 "2007-09-30" 7363680000 4711940096 5813 5491449000
1 "2007-12-31" 10291531000 7363680256 5843 8438051000
1 "2008-03-31" 3366156000 10291530752 5874 3288918000
1 "2008-06-30" 6797707000 3366156032 5904 6892908000
1 "2008-09-30" 10240882000 6797706752 5935 10440326000
1 "2008-12-31" 13563220000 10240881664 5965 13867376000
1 "2009-03-31" 3523844000 13563219968 5995 2573409000
1 "2009-06-30" 6902083000 3523844096 6026 4807758000
1 "2009-09-30" 10434314000 6902083072 6056 6859029000
1 "2009-12-31" 14293863000 10434314240 6087 9001138000
1 "2010-03-31" 3956690000 14293863424 6117 2324413000
1 "2010-06-30" 8232540000 3956689920 6147 4770383000
1 "2010-09-30" 12714440000 8232540160 6178 7352523000
1 "2010-12-31" 17561832000 12714439680 6209 10422598000
1 "2011-03-31" 5617613000 17561831424 6239 4115507000
1 "2011-06-30" 11644324000 5617612800 6269 9148239000
1 "2011-09-30" 20204827000 11644323840 6300 17318474000
1 "2011-12-31" 29087326000 20204826624 6330 27040923000
1 "2012-03-31" 9434413000 29087326208 6361 10411417000
1 "2012-06-30" 19001393000 9434413056 6391 21070247000
1 "2012-09-30" 28794838000 19001393152 6422 31614058000
1 "2012-12-31" 38911398000 28794836992 6452 41578323000
1 "2013-03-31" 1.0526e+10 38911397888 6482 1.1539e+10
1 "2013-06-30" 2.2969e+10 1.0526e+10 6513 2.4698e+10
1 "2013-09-30" 3.6564e+10 2.2969e+10 6543 3.8431e+10
1 "2013-12-31" 5.1294e+10 3.6564e+10 6574 5.2414e+10
1 "2014-03-31" 1.480e+10 5.1294e+10 6604 1.6354e+10
1 "2014-06-30" 3.2412e+10 1.48e+10 6634 3.2396e+10
1 "2014-09-30" 5.1372e+10 3.2412e+10 6665 4.9152e+10
1 "2014-12-31" 7.0637e+10 5.1372e+10 6696 6.6156e+10
1 "2015-03-31" 2.0563e+10 7.0637e+10 6726 1.732e+10
1 "2015-06-30" 4.4901e+10 2.0563e+10 6756 3.4746e+10
1 "2015-09-30" 6.8545e+10 4.4901e+10 6787 5.1019e+10
1 "2015-12-31" 9.2705e+10 6.8545e+10 6817 6.555e+10
1 "2016-03-31" 2.657e+10 9.2705e+10 6848 1.3961e+10
1 "2016-06-30" 5.2718e+10 2.657e+10 6878 2.7372e+10
1 "2016-09-30" 7.8824e+10 5.2718e+10 6909 4.1012e+10
1 "2016-12-31" 1.04416e+11 7.8824e+10 6939 5.4708e+10
1 "2017-03-31" 2.7081e+10 1.04416e+11 6969 1.6176e+10
1 "2017-06-30" 5.3204e+10 2.7081e+10 7000 3.4287e+10
1 "2017-09-30" 7.8773e+10 5.3204e+10 7030 5.3393e+10
1 "2017-12-31" 1.04869e+11 7.8773e+10 7061 7.4059e+10
1 "2018-03-31" 2.7357e+10 1.04869e+11 7091 2.2257e+10
1 "2018-06-30" 5.5484e+10 2.7357e+10 7121 4.4572e+10
1 "2018-09-30" 7.8378e+10 5.5484e+10 7152 6.7132e+10
1 "2018-12-31" 1.06212e+11 7.8378e+10 7183 8.8143e+10
1 "2019-03-31" 3.0351e+10 1.06212e+11 7213 2.1887e+10
1 "2019-06-30" 6.2069e+10 3.0351e+10 7243 4.3472e+10
1 "2019-09-30" 9.4166e+10 6.2069e+10 7274 6.5544e+10
1 "2019-12-31" 1.26814e+11 9.4166e+10 7304 8.7588e+10
1 "2020-03-31" 3.4483e+10 1.26814e+11 7335 2.3107e+10
1 "2020-06-30" 7.0071e+10 3.4483e+10 7365 4.4681e+10
1 "2020-09-30" 1.07455e+11 7.0071e+10 7396 6.5627e+10
1 "2020-12-31" 1.43242e+11 1.07455e+11 7426 8.7537e+10
1 "2021-03-31" 3.8271e+10 1.43242e+11 7456 2.2308e+10
1 "2021-06-30" 7.6835e+10 3.8271e+10 7487 4.5471e+10
1 "2021-09-30" 1.15108e+11 7.6835e+10 7517 6.9126e+10
1 "2021-12-31" 1.53503e+11 1.15108e+11 7548 9.320e+10
1 "2022-03-31" 4.0852e+10 1.53503e+11 7578 2.4389e+10
1 "2022-06-30" 8.019e+10 4.0852e+10 7608 4.873e+10
1 "2022-09-30" 1.19851e+11 8.019e+10 7639 7.3142e+10
1 "2022-12-31" 1.60469e+11 1.19851e+11 7670 9.8748e+10
end

Can anyone give me the code for this task? 2- For example, I need to find the mean of sales in this data based on quarters.

3- After I get the average sales, I need to keep only one observation per firm. Please, I need the code, but each firm should keep the average sales I calculated in step 2 above.
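A sketch of what I have in mind for these steps (variable names follow my dataset above; I'm not certain this is the right approach):

```stata
* 1) Build a quarterly Stata date from the string date.
gen qdate = qofd(date(endingdateofstatistics, "YMD"))
format qdate %tq

* 2)-3) Average sales per firm, keeping one observation per firm.
collapse (mean) avg_sales = sales, by(code)
```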

sum var1 if timepoint == 1

sum var2 if timepoint == 1

sum var3 if timepoint == 1

sum var4 if timepoint == 1

Then I need to generate a new variable (with its mean and SD) equal to the average of the four item means:

(mean var1 + mean var2 + mean var3 + mean var4) / 4

What would be the command for this second part?
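One way I can think of (a sketch, assuming var1-var4 and timepoint exist) is to accumulate each item's mean from r(mean) after summarize:

```stata
* Add up the four item means returned by summarize in r(mean),
* then divide by 4.
local total = 0
foreach v of varlist var1 var2 var3 var4 {
    quietly summarize `v' if timepoint == 1
    local total = `total' + r(mean)
}
display `total'/4
```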

Thanks!