I ran a meta-analysis and found statistically significant heterogeneity in my data. Does anyone know a command for running sensitivity analyses for randomized clinical trials? I have found the episens command, but it does not fit my data.
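For reference, one common form of sensitivity analysis after finding heterogeneity is a leave-one-out (influence) analysis. A minimal sketch using the community-contributed metaninf command, assuming the effect sizes and their standard errors are stored in variables es and se (hypothetical names; substitute your own):

Code:

* install once from SSC
ssc install metaninf

* re-estimate the pooled effect omitting one study at a time
metaninf es se

Each row of the output shows the pooled estimate with that study omitted, so studies driving the heterogeneity stand out.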

Thank you!

Best regards, Muhammad Sarmad Latif

I thought I would do it by saving a value from the dataset in a macro variable and using the macro variable to substitute into the xline() graph option.

The code below fails to add a line when I use the macro variable.

Is there something else I should be doing to get this to work?

Code:

clear
set obs 12
gen y = 42
gen time = _n/y*100
line y time, xline(7)
global time1 7
macro list
line y time, xline(`time1')
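For reference, the likely culprit is the macro reference rather than xline() itself: time1 is defined as a global macro, but `time1' refers to a local macro, which is empty here, so the option expands to xline() with nothing inside and no line is drawn. A minimal sketch of the two working alternatives:

Code:

* reference the global with the $ prefix ...
global time1 7
line y time, xline($time1)

* ... or define a local and reference it with `' quotes
local time2 7
line y time, xline(`time2')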

Thanks,

Kim


date | 1 | 2 | 3 | 4 |
8/21/2013 | 88.8 | 73.55 | 44.5975 | 79.38 |
8/22/2013 | 88.26 | 73.46 | 44.73 | 79.77 |
8/23/2013 | 88.41 | 73.44 | 44.775 | 80.01 |
8/26/2013 | 87.53 | 73.03 | 43.75 | 78.54 |
8/27/2013 | 86.17 | 72.86 | 43.5425 | 77.97 |
8/28/2013 | 86.53 | 72.38 | 43.8025 | 76.85 |
8/29/2013 | 86.57 | 72.43 | 43.8425 | 77.31 |
8/30/2013 | 86.41 | 72.98 | 43.605 | 77.89 |
9/2/2013 | 86.41 | 72.98 | 43.605 | 77.89 |
9/3/2013 | 86.42 | 72.68 | 44.255 | 77.75 |
9/4/2013 | 86.9 | 72.91 | 44.0475 | 77.49 |
9/5/2013 | 87.04 | 72.67 | 44.0525 | 77.14 |
9/6/2013 | 87.16 | 72.59 | 44.1675 | 77.15 |
9/9/2013 | 87.56 | 73.51 | 44.6375 | 78.16 |
9/10/2013 | 88.53 | 73.96 | 46.1475 | 77.95 |
9/11/2013 | 89.23 | 74.05 | 46.5775 | 78.27 |
9/12/2013 | 89.01 | 73.91 | 46.265 | 78.26 |
9/13/2013 | 88.57 | 74.36 | 47.25 | 79.05 |
9/16/2013 | 89.03 | 74.78 | 47.345 | 80.16 |
9/17/2013 | 89.06 | 75.15 | 47.9 | 79.83 |
9/18/2013 | 89.91 | 76.42 | 48.4125 | 80.295 |
9/19/2013 | 90.07 | 76.21 | 48.6775 | 80.12 |
9/20/2013 | 89.68 | 75.83 | 49.7075 | 79.39 |
9/23/2013 | 89.09 | 76.42 | 49.06 | 79.28 |
9/24/2013 | 88.22 | 75.75 | 48.335 | 78.62 |
9/25/2013 | 87.08 | 74.65 | 47.89 | 77.72 |
9/26/2013 | 87.07 | 74.62 | 48.39 | 78.05 |
9/27/2013 | 86.73 | 74.36 | 48.2625 | 77.21 |
9/30/2013 | 86.69 | 73.96 | 47.775 | 75.59 |
10/1/2013 | 87.47 | 73.59 | 48.305 | 76.16 |
10/2/2013 | 87.29 | 73.72 | 47.955 | 75.93 |
10/3/2013 | 86.58 | 73.16 | 47.1625 | 75.84 |
10/4/2013 | 87.31 | 72.8 | 47.62 | 76.02 |
10/7/2013 | 86.59 | 71.87 | 46.5825 | 75.65 |
10/8/2013 | 85.61 | 72.9 | 45.6325 | 76.33 |
10/9/2013 | 85.96 | 73 | 45.9625 | 76.95 |
10/10/2013 | 87.78 | 74.79 | 47.2575 | 77.89 |
10/11/2013 | 89.45 | 74.82 | 48.05 | 78.48 |
10/14/2013 | 89.8 | 74.68 | 48.36 | 78.74 |
10/15/2013 | 89.93 | 74.37 | 47.8425 | 77.6 |
10/16/2013 | 91.11 | 75.6 | 48.9075 | 78.34 |
10/17/2013 | 91.97 | 75.78 | 49.57 | 79.42 |
10/18/2013 | 91.63 | 75.71 | 50.1125 | 79.41 |
10/21/2013 | 91.2 | 75.15 | 50.01 | 78.97 |
10/22/2013 | 92.36 | 76.32 | 49.995 | 80.38 |
10/23/2013 | 92.1 | 75.9 | 49.7225 | 80.91 |
10/24/2013 | 92.35 | 76.42 | 50.7275 | 80.61 |
10/25/2013 | 92.09 | 76.08 | 50.765 | 80 |
10/28/2013 | 92.39 | 77.14 | 50.77 | 81.3 |
10/29/2013 | 93.14 | 77.06 | 51.06 | 82.46 |

I'd like to work with geographic RDD, but I can't find anything in Stata's help about the programming. Are there do-files and datasets available for replication? Would somebody help me, please? Thanks.

I am trying to estimate the effect of positive and negative changes in income on GHQ (well-being) with random slopes. After estimation, I then want to extract the individual coefficients with predict, reffects.

I have a large longitudinal dataset.

Code:

mixed GHQ c.income c.lagged_income gain loss ///
    c.mean_income c.mean_lagged_income mean_gain mean_loss ///
    || pid: || pid: gain no_change loss, nocons cov(id) vce(cluster pid)

c.income is income this year

c.lagged_income is income last year

gain is a dummy variable indicating that the individual had a positive change in income of more than 5%

loss is a dummy variable indicating that the individual had a negative change in income of more than 5%

the reference category is no_change.

I have included Mundlak-type terms (individual means of the time-varying variables), shown with the mean_ prefix.

I have included random intercepts for each individual and random slopes for gain and loss.

Is this correct? It is not obvious to me which covariance structure to use. Unstructured seems the most appropriate, but cov(id) is the default for categorical variables.
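For reference, a sketch of the unstructured alternative and the random-effects extraction, using the variable names from the post (this is a sketch with a single random-effects equation, not a definitive specification of the model above):

Code:

* unstructured covariance: all slope variances and covariances estimated freely
mixed GHQ c.income c.lagged_income gain loss ///
    c.mean_income c.mean_lagged_income mean_gain mean_loss ///
    || pid: gain no_change loss, nocons cov(unstructured) vce(cluster pid)

* extract the predicted random effects (one new variable per random term)
predict re*, reffects

cov(identity) constrains the random slopes to share a single variance with zero covariances, while cov(unstructured) lets each have its own variance and lets them correlate; note the postestimation option is spelled reffects.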

Thanks

I have about one hundred patients (id) who have visited mental health professionals from 1 to 20 times (visit). I'm interested in when, in the series of visits with these professionals, activities of daily living such as screen use are first discussed or recommended. Using sts graph, I plotted cumulative incidence curves for this against various factors such as gender, age group, diagnosis, and provider type, where the event is the first time screen time is discussed or recommended (image below the dataex output due to its large size; can it be resized?). I ran stset and then sts graph to generate the graph below (the actual code follows the data listed below).

Can I use survival methods, including the log-rank test, to compare these curves even though I don't know the actual visit times but just their temporal order? Times between visits could be anywhere from a couple of weeks to a couple of months, but they are not of interest and were not abstracted.

If the log-rank test's assumptions are violated and it can't be used to compare these curves, how can I compare them?
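For reference, the comparisons themselves are one-line commands once the data are stset as in the code below (sts test defaults to the log-rank test):

Code:

sts test gender              // log-rank test comparing survivor functions by gender
sts test gender, wilcoxon    // Wilcoxon (Breslow) alternative, weighting earlier visits more

The wilcoxon option is one commonly used alternative when the proportional-hazards-style weighting of the log-rank test is in doubt.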

Thanks for your help with this. Please ignore the images that follow the data; I wasn't able to delete them.

Regards, John LeBlanc

Dalhousie University

Code:

* 132 of the original 595 obs kept.
clear
input int id byte(visit gender) float(agegrp scn)
154 1 0 1 0 154 2 0 1 0 154 3 0 1 0 154 4 0 1 0 154 5 0 1 0 154 6 0 1 0 154 7 0 1 0
154 8 0 1 0 154 9 0 1 0 154 10 0 1 0 154 11 0 1 0 154 12 0 1 0 154 13 0 1 0 154 14 0 1 0
154 15 0 1 0 154 16 0 1 0 154 17 0 1 0 154 18 0 1 2 154 19 0 1 0 154 20 0 1 0 154 21 0 1 0
154 22 0 1 0 154 23 0 1 0 154 24 0 1 0 154 25 0 1 0 154 26 0 1 0
155 1 0 1 1 155 2 0 1 1 155 3 0 1 0 155 4 0 1 0 155 5 0 1 0 155 6 0 1 0
157 1 1 0 0
158 1 1 0 0 158 2 1 0 0 158 3 1 0 0 158 4 1 0 0
159 1 1 0 1 159 2 1 0 0 159 3 1 0 0 159 4 1 0 0 159 5 1 0 0 159 6 1 0 0 159 7 1 0 0
159 8 1 0 0 159 9 1 0 0
160 1 1 0 0 160 2 1 0 1
161 1 1 0 0 161 2 1 0 0 161 3 1 0 0 161 4 1 0 0 161 5 1 0 0 161 6 1 0 0 161 7 1 0 0
161 8 1 0 0 161 9 1 0 0 161 10 1 0 0 161 11 1 0 0 161 12 1 0 0 161 13 1 0 0 161 14 1 0 0
161 15 1 0 0 161 16 1 0 0 161 17 1 0 0 161 18 1 0 0
162 1 1 0 0
164 1 1 0 0 164 2 1 0 0
165 1 1 0 1 165 2 1 0 0 165 3 1 0 0 165 4 1 0 0 165 5 1 0 0 165 6 1 0 0 165 7 1 0 0
165 8 1 0 0 165 9 1 0 0 165 10 1 0 0 165 11 1 0 0 165 12 1 0 0 165 13 1 0 0
166 1 1 0 2 166 2 1 0 0 166 3 1 0 0 166 4 1 0 0 166 5 1 0 0 166 6 1 0 0
168 1 1 0 0 168 2 1 0 0
169 1 1 0 2 169 2 1 0 0
170 1 1 0 1 170 2 1 0 0 170 3 1 0 0 170 4 1 0 0
172 1 1 0 1 172 2 1 0 0 172 3 1 0 0 172 4 1 0 0 172 5 1 0 0 172 6 1 0 0 172 7 1 0 0
172 8 1 0 0 172 9 1 0 0 172 10 1 0 0 172 11 1 0 0 172 12 1 0 0 172 13 1 0 0 172 14 1 0 0
172 15 1 0 0 172 16 1 0 0 172 17 1 0 0 172 18 1 0 0 172 19 1 0 0 172 20 1 0 0 172 21 1 0 0
173 1 1 0 1
174 1 1 0 0
175 1 1 1 1
176 1 1 1 1 176 2 1 1 0 176 3 1 1 0
177 1 1 1 0 177 2 1 1 0 177 3 1 1 0 177 4 1 1 0 177 5 1 1 0 177 6 1 1 0 177 7 1 1 0
177 8 1 1 0 177 9 1 1 0
end
label values gender gender
label def gender 0 "Male", modify
label def gender 1 "Female", modify
label values agegrp agegrp
label def agegrp 0 "Age 4-12", modify
label def agegrp 1 "Age 13-16", modify
label values scn disrec
label def disrec 0 "Not raised", modify
label def disrec 1 "Discussed", modify
label def disrec 2 "Recommended", modify

stset visit, failure(scn==1 2) scale(1) id(id)
sts graph, failure by(gender) ytitle(% of patients) xtitle(Visit #) xscale(range(0 26)) title(Cumulative incidence of visit when screen use first discussed or recommended, size(medsmall))

I wanted to ask a question about Stata coding. I'm working on a big dataset containing several sets of variables (id, date, category, country, etc.).

The problem is that I want to estimate a difference-in-differences model, which means I need to create a smaller sample from the big one, keeping only observations that are identical in terms of id, date, and category but come from different countries, and work only with those observations rather than the whole dataset. I have a list of 8 countries that I have already grouped into dummy variables: east, west, and central. The goal is to see how the countries affected by a policy (only west and east) differ in their response compared with the central countries that were not part of the policy.

The central countries contain hundreds or even thousands of different ids, but I only want to keep those ids that are identical across the different country groups (i.e., keep only observations whose central-country id is also part of the western and eastern countries) on the same date and in the same category. I can't do this manually since the dataset is too big, and I've tried writing a loop many times but always failed. I would appreciate any ideas.
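For what it's worth, a sketch of one way to build such a sample without an explicit loop, assuming a numeric region variable coded 1 = east, 2 = west, 3 = central (hypothetical coding; adapt to the actual dummies):

Code:

* within each id-date-category cell, flag whether the cell occurs
* in a treated region (east/west) and in the control region (central)
bysort id date category: egen has_treated = max(inlist(region, 1, 2))
bysort id date category: egen has_control = max(region == 3)

* keep only cells observed in both groups
keep if has_treated & has_control
drop has_treated has_control

egen's max() of a true/false expression returns 1 if any observation in the by-group satisfies it, which is what makes the loop unnecessary.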

Thanks in advance

Rebani

I would like to know how I can calculate the LD50, LD90, and LD95 (lethal doses) for bioassays using logit regression.

I found probit regression commands; however, I would like to implement the formula in Stata programming. Could someone please help me?

1) For the LD50, the formula is the same in probit and logit regression:

local LD50 = exp(-_b[_cons]/_b[logdose])

2) However, for the other LDs there is a change:

I found the links listed at the end for the probit model, as follows:

Probit: LDp = (invnormal(p) - constant) / coef

Stata command:

local ld90 = exp((invnormal(0.9) - _b[_cons])/_b[logdose])

However, for the logit model the recommended formula is:

Logit: LDp = (log(p/(1-p)) - constant) / coef

How can I program the above formula in Stata for logit regression?
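For reference, the logit version can mirror the probit one by using Stata's built-in logit() function, which returns ln(p/(1-p)). A sketch, assuming a binary outcome variable died (hypothetical name) and the covariate logdose as in the post:

Code:

logit died logdose

* back-transform from the log-dose scale with exp()
local LD50 = exp(-_b[_cons]/_b[logdose])
local LD90 = exp((logit(0.90) - _b[_cons])/_b[logdose])
local LD95 = exp((logit(0.95) - _b[_cons])/_b[logdose])

display "LD50 = `LD50'   LD90 = `LD90'   LD95 = `LD95'"

Note that for p = 0.5, logit(0.5) = 0, so the LD50 line reduces to the same expression used in the probit case.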

Another question concerns normality: if the distribution is not normal, should I calculate the LD from the logit regression? I have read in some places that if the data are not normally distributed, logit regression should be used for LD estimation.

Do you have another model suggestion for calculating LD with a non-normal distribution?

Thanks

LINKS:

https://www.stata.com/statalist/arch.../msg00184.html

https://www.statalist.org/forums/for...r-logit-probit

https://stats.idre.ucla.edu/other/mu...git-or-probit/

I am using Stata/SE 15.1 for Mac. I use a global to specify my path at the beginning of my do-file. I write it like this:

Code:

global path "/Users/felix/Dropbox/Felix/F&Vs PHD/EINITE/Aggregation/Paper/Aggregation_21082019/"

Code:

global path "/Users/felix /Dropbox /Felix/F&Vs PHD / EINITE/ Aggregation/Paper/Aggregation_21082019"

I specify my command like this:

Code:

import excel "$path/TOTAL WITH PROPERTYLESS/Germany_total_1400.xlsx", firstrow

Code:

file /Users/felix /Dropbox /Felix/F&Vs PHD / EINITE/ Aggregation/Paper/Aggregation_21082019/TOTAL WITH PROPERTYLESS/Germany_total_1400.xlsx
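For debugging, a hedged sketch: display the macro and let Stata check the file before importing, which makes any stray spaces in the stored path visible immediately:

Code:

* show exactly what the global contains
display `"$path"'

* error out (showing the exact path tried) if the file cannot be found
confirm file "$path/TOTAL WITH PROPERTYLESS/Germany_total_1400.xlsx"

confirm file exits with an error message when the file does not exist, so it isolates path problems from import excel problems.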

Many thanks in advance for your help.


Code:

mixed age || state:

and with \[\epsilon_{i,t+1} = \rho \epsilon_{i,t} + \eta_{i,t}\] (where \(\eta_{i,t}\) is the white-noise innovation).

This should be simple, but when we generate the data with some given parameters and then analyse the simulated data, we do not get these parameters back with xtregar, fe! (More precisely, the results are significantly different from the initial parameters.) What could go wrong?

Here are the data-construction and simulation code. Thanks in advance for any hint!

Code:

clear all

* to get easily a panel structure: nr year
set obs 100
gen nr = _n
expand 19
gen year = 2000
sort nr
bysort nr: replace year = year[_n-1] + 1 if _n != 1
sort nr year
xtset nr year
save initial0.dta, replace

* We consider some arbitrary parameters:
use initial0.dta, clear
scalar the_rho = 0.3
scalar the_sigma_epsilon = 2
scalar the_sigma_eta = the_sigma_epsilon * sqrt(1 - the_rho*the_rho)
scalar the_sigma_c_i = 0.9
matrix the_m = (0,0,0)
matrix the_sd = (the_sigma_eta, the_sigma_epsilon, the_sigma_c_i)

* We try to simulate 100 times the corresponding process
set seed 89
gen rho_emp = 0
gen sigma_e_emp = 0
gen sigma_std_ci = 0
forvalues i = 1/100 {
    drawnorm eta epsilon0 c_i, means(the_m) sds(the_sd)
    bysort nr: gen epsilon = epsilon0 if _n==1
    bysort nr: replace epsilon = eta + the_rho * epsilon[_n-1] if _n>1
    bysort nr: replace c_i = c_i[1]
    gen y = 5 + c_i + epsilon
    xtregar y, fe
    replace rho_emp = e(rho_ar) if _n==`i'
    replace sigma_e_emp = e(sigma_e) if _n==`i'
    replace sigma_std_ci = e(sigma_u) if _n==`i'
    drop epsilon epsilon0 y eta c_i
}

* comparing the results
keep if _n<=100
keep rho_emp sigma_e_emp sigma_std_ci
gen constant = 1
reg rho_emp constant
test (_cons=0.3)
reg sigma_e_emp constant
test (_cons=2)
reg sigma_std_ci constant
test (_cons=0.9)

Code:

scalar the_sigma_eta = the_sigma_epsilon * sqrt(1-the_rho*the_rho)

The reference for xtregar, fe can be found here: https://www.stata.com/manuals13/xtxtregar.pdf
