The manual for the format command says, for example, that %9.2f specifies the f format that is __nine characters wide__ and has two digits following the decimal point.

Compare for example the following presentation in the output window and the browser of the same dataset:

[screenshot: the same dataset shown in the output window and in the data browser]

Here all variables are formatted with format %20.0g, which, according to the manual, should provide capacity for 20 characters. However, only the data browser window seems to be formatted consistently with the manual; the output window does not obey the same rule and instead formats the values according to byte width (Cyrillic letters occupy 2 bytes in the UTF-8 Unicode character encoding).

If the format width is doubled, the text fits nicely in the output window, but this results in unnecessarily wide spacing in the browser window (and the situation will be worse for languages using 3- and 4-byte Unicode characters):

[screenshot: the same comparison with the format width doubled]
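The byte-versus-character distinction can be checked directly with Stata's Unicode string functions (Stata 14 or later):

```stata
display ustrlen("Привет")   // 6 characters
display strlen("Привет")    // 12 bytes in UTF-8 (2 bytes per Cyrillic letter)
```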

- I wonder what the recommendations are for an external program saving a dataset in Stata's .dta format: should it apply the byte-widest or the character-widest format width?
- In practice, which do users prefer most commonly: **browse** or **list**?
- Is there any "fit" format, one that will expand the column precisely enough to fit the widest label/value for the purposes of the list/browse commands? (I suppose not, based on the description of the existing formats.)


I have a database with 30 observations (patients), each evaluated with about 10 clinical evaluation scales, repeated at three time points (before treatment, after treatment, and after a period of follow-up; the first and second time points are fully filled, while at the third there are some missing values).

I divided the patients into three groups, and I want to test whether there are differences within and between the three groups at the second and third time points.

Searching the Stata manual and Statalist, I found much information about using xtmixed with one dependent variable and interactions between the independent variables and time points, but...

if I have more than one dependent variable (in my database, multiple evaluation scale scores, both continuous and not), must I run a mixed model for each one, or is there a command to perform a single analysis?
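For the one-model-per-outcome route, a minimal sketch would simply loop mixed over the outcomes. All names here (scale1-scale3, group, time, patid) are hypothetical stand-ins for the actual variables:

```stata
* Sketch only: one mixed model per outcome (variable names hypothetical)
foreach y of varlist scale1 scale2 scale3 {
    mixed `y' i.group##i.time || patid:
    estimates store m_`y'
}
```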

Thank you

Emanuele

I am new to Statalist and apologize if my question has already been answered here. I checked some posts, but they did not really answer my question.

I want to investigate the effect of labor policy on capital structure in 15 EU countries. I would like to apply a difference-in-differences (DID) model, as is common when identifying the effect of state policy. However, my treatment variable is continuous: it is a country-level index that can take any value between 0 and 6 (not only integer values but also, e.g., 2.75, 0.37, etc.). This index measures the strength of labor regulation, and it changes in each country as that country's labor law changes. The standard DID, with a binary treatment variable and a year dummy for the pre- and post-intervention periods, is not really applicable here as I understand it. But after studying some papers I found a setting very similar to mine, and the authors say they employ a DID research design. They estimate the effect of World War II on female labor supply in the US and describe their model as follows:

y is weeks worked by female i, in state s, in year t. They have two periods, 1940 and 1950, where d1950 is a dummy for the latter year; X is a vector of individual characteristics, δs are state dummies, and ms is the mobilization rate of men in each state (a proxy for the WWII effect). Their interaction estimates whether states with higher mobilization rates during WWII saw a stronger rise in females' weeks worked from 1940 to 1950; this is given by the coefficient φ.
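From that description, the estimating equation can be reconstructed as (my reading, using exactly the symbols defined above):

y_ist = β0 + β1·d1950_t + X_ist'γ + δ_s + φ·(m_s × d1950_t) + ε_ist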

What I do not really understand is why one has to take the latest year from the sample period. Does it mean that the dummy takes the value 1 for the year 1950 here? It seems more plausible to me to take the sample's beginning date.

The next question: can I just use the xtreg command?

Code:

xtreg outcome controls_variables i.Y2015_dummy##c.Index, fe vce(r)

And why is this a DID analysis, or rather, what does "the generalized DID strategy" mean?

Sorry for such a long post; I just wanted to be clear, and I would be thankful for any help!

Best regards

Marina


I am currently working on an assignment where I want to match a group of companies doing an IPO to a control group by industry and market cap.

For the industry I need to match on SIC codes; the problem is that not every treated company has a control company with the same 3-digit SIC code.

I am trying to write an algorithm (a loop) that first looks at the 3-digit SIC code; when there is no match, it should look at the 2-digit SIC code; and when there is no match on the 2-digit SIC code, look at the 1-digit SIC code. But I haven't found the right algorithm yet.

I know that I need to merge the two groups of companies and use something like (note that Stata variable names cannot start with a digit, so the 3-digit code variable needs a name like sic3):

Code:

merge 1:m sic3 using controlgroup
keep if _merge == 3

But before or after this there needs to be a step that searches for a match on the 2-digit or even the 1-digit SIC code, because it is also possible that _merge is 1 (treated group only) or 2 (control group only), i.e., that there is no match.

Maybe someone of you can help me?
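One possible shape for that cascade, sketched on hypothetical files treated.dta and controls.dta that each hold one row per firm with a string SIC variable sic: first record, for every treated firm, the finest digit level at which any control shares the prefix, then match at that level.

```stata
* Cascading industry match sketch (file and variable names are hypothetical)
use controls, clear
foreach d of numlist 3 2 1 {
    gen sic`d' = substr(sic, 1, `d')
    levelsof sic`d', local(ctrl`d') clean   // all control prefixes at level `d'
}

use treated, clear
gen match_level = .
foreach d of numlist 3 2 1 {
    gen sic`d' = substr(sic, 1, `d')
    * flag the finest level at which some control shares the prefix
    replace match_level = `d' if missing(match_level) & ///
        strpos(" `ctrl`d'' ", " " + sic`d' + " ")
}
tab match_level, missing
* then merge (or joinby) with the controls on sic3/sic2/sic1 as match_level says
```

One caveat on this sketch: levelsof stores the prefixes in a local macro, so it assumes the number of distinct SIC prefixes is small enough to fit in a macro.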

Many thanks


Best

Daniel

Code:

foreach x in mpg headroom trunk {
    scatter price `x', title(proper("price and `x'"))
}

The word "proper" appears literally in the title. Is there a way that works?
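One approach that may work is Stata's inline macro evaluation, `=exp', which applies proper() before the title option ever sees the text:

```stata
* Sketch: evaluate proper() via `=exp' so the result, not the function
* name, lands in the title
sysuse auto, clear
foreach x in mpg headroom trunk {
    scatter price `x', title(`=proper("price and `x'")')
}
```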

but that standardizes them using the mean and SD of all years together. I would prefer to standardize them using the mean and SD of the first year, 2005. What's the simplest way to do that?

Bonus round: my data are multiply imputed. Is there still a simple alternative to this?

mi passive: egen zscore = std(score)
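For the non-imputed case, one sketch (assuming variables named score and year, which are my guesses at the setup): summarize stores r(mean) and r(sd), so generate can apply the 2005 moments to every observation.

```stata
* Standardize using only the 2005 mean and SD (variable names hypothetical)
summarize score if year == 2005
generate zscore = (score - r(mean)) / r(sd)
```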


I need clarification on Markov chain Monte Carlo for panel quantile regression. What are noisy, draws, burn, and arate?

I used the MCMC option since the other option, Nelder-Mead numerical optimization, did not give me detailed results. Only coefficients were generated with Nelder-Mead; the standard errors, p-values, and confidence intervals were not.

I used the command:

Code:

qregpd LogEmissions LogIndu LogGDP LogGDP2 Logmanu LogTrade Apopn, id(Country) fix(Year) quantile(0.7)

Alternatively, I then used the MCMC option as below:

Code:

qregpd LogEmissions LogIndu LogGDP LogGDP2 LogManu LogTrade LogAnnPopn, id(Country) fix(Year) optimize(mcmc) noisy draws(1000) burn(500) arate(.5) quantile(0.7)

My results were perfectly generated using the MCMC option but not Nelder-Mead numerical optimization.

Can somebody please explain the differences?

]]>

Company | Year | Sales | Industry
------- | ---- | ----- | --------
ABC     | 1998 |    10 |        1
BCD     | 1998 |    14 |        1
CDE     | 1998 |    12 |        1
DEF     | 1998 |    67 |        2
EFG     | 1998 |    55 |        2
FGH     | 1998 |    60 |        2

I am attempting to use a dynamic list of globals as input to sureg. In particular, in the program setup I require the user to declare both the number of equations and globals representing the equations:

I later use a forvalues loop to build the input list for sureg:

Code:

set more off
//Load data here
sysuse auto, clear

*Specify model equations
*Set the number of equations after the equal sign
scalar numEquations = 3

*Each equation should have a global declaration of the form:
*global eq1 (Y X1 X2 ... Xk)
*Example: global eq1 (price foreign weight length)
global eq1 (price foreign weight length)
global eq2 (mpg foreign weight)
global eq3 (displ foreign weight)

Code:

*List which equations to include in the regression
forvalues i = 1(1)`=numEquations' {
    local tmp "eq`i'"
    local eqList `eqList' char(36)+`tmp'+char(32)
}
di "`eqList'"

However, when I call sureg:

I am getting the error "coding operators not allowed". Any assistance on this issue would be much appreciated.

Code:

sureg `eqList', const(`constList')

Thanks,

Erica

The full code is below:

Code:

set more off
//Load data here
sysuse auto, clear

////////////////////////////////////////////////////////////
/* Step One: Regression setup */
////////////////////////////////////////////////////////////
*Specify model equations
*Set the number of equations after the equal sign
scalar numEquations = 3

*Each equation should have a global declaration of the form:
*global eq1 (Y X1 X2 ... Xk)
*Example: global eq1 (price foreign weight length)
global eq1 (price foreign weight length)
global eq2 (mpg foreign weight)
global eq3 (displ foreign weight)

////////////////////////////////////////////////////////////
/* Step Two: Run regression */
////////////////////////////////////////////////////////////
*List which equations to include in the regression
forvalues i = 1(1)`=numEquations' {
    local tmp "eq`i'"
    local eqList `eqList' char(36)+`tmp'+char(32)
}
di "`eqList'"
sureg `eqList'
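The problem, as far as I can tell, is that char(36)+`tmp'+char(32) inside a plain local definition is stored as literal text, not evaluated, so sureg receives the string "char(36)+eq1+char(32)" rather than "$eq1". One fix (a sketch) is to skip the dollar-sign construction entirely and expand each global's contents directly when building the list:

```stata
* Build the equation list by expanding each global's contents directly
forvalues i = 1(1)`=numEquations' {
    local eqList `eqList' ${eq`i'}
}
di "`eqList'"
sureg `eqList'
```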

Code:

program define myeval
    args lnfj Xb
    ologit $ML_y1 `Xb'
    tempvar lnfj_part1
    predict `lnfj_part1', pr
    ...
    replace `lnfj' = log(`lnfj_part1') + ...
end

Code:

variable _MLtua1 not found
st_data():  3500  invalid Stata variable name
mopt__st_user_lf1(): - function returned error
mopt__calluser_lf2(): - function returned error
opt__eval_nr_lf1(): - function returned error
opt__eval(): - function returned error
_optimize_evaluate(): - function returned error
_mopt__evaluate(): - function returned error
_moptimize_evaluate(): - function returned error
_moptimize_search(): - function returned error
Mopt_search(): - function returned error
<istmt>: - function returned error
r(3500);

Example:

Code:

. webuse hypoxia
(Hypoxia study)

. stset dftime, failure(failtype==1)

     failure event:  failtype == 1
obs. time interval:  (0, dftime]
 exit on or before:  failure

------------------------------------------------------------------------------
        109  total observations
          0  exclusions
------------------------------------------------------------------------------
        109  observations remaining, representing
         33  failures in single-record/single-failure data
    353.129  total analysis time at risk and under observation
                                                at risk from t =         0
                                     earliest observed entry t =         0
                                          last observed exit t =     8.454

. stcomlist, compet1(2) at(1 5) by(pelvicln)

            failure: failtype == 1
 competing failures: failtype == 2

    Time       CIF        SE      [95% Conf. Int.]
--------------------------------------------------
pelvicln=E
       1     0.1304    0.0702     0.0327    0.2972
       5     0.2609    0.0916     0.1062    0.4469
pelvicln=N
       1     0.1256    0.0415     0.0587    0.2191
       5     0.2217    0.0523     0.1290    0.3302
pelvicln=Y
       1     0.4091    0.1048     0.2085    0.6007
       5     0.5568    0.1079     0.3261    0.7364

            failure: failtype == 2
 competing failures: failtype == 1

    Time       CIF        SE      [95% Conf. Int.]
--------------------------------------------------
pelvicln=E
       1     0.0435    0.0425     0.0031    0.1824
       5     0.2174    0.0860     0.0791    0.3993
pelvicln=N
       1     0.0312    0.0217     0.0059    0.0965
       5     0.1582    0.0496     0.0763    0.2667
pelvicln=Y
       1     0.1364    0.0732     0.0341    0.3087
       5     0.1364    0.0732     0.0341    0.3087

Code:

ssc install stcomlist

Example:

Code:

. sysuse nlsw88
(NLSW, 1988 extract)

. xmiss race union

                      union
-----------------------------------------------------
race          Missing     Total     % missing
-----------------------------------------------------
white             284      1637          17.3
black              82       583          14.1
other               2        26           7.7
-----------------------------------------------------
All               368      2246          16.4

Code:

ssc install xmiss