So I estimate the betas using GMM in Mata.

: void GMM_DL(todo,betas,crit,g,H)

{

PHI=st_data(.,("phi"))

PHI_LAG=st_data(.,("phi_lag"))

Z=st_data(.,(" lagloglab laglogmat logcapital "))

X=st_data(.,(" logcapital logmaterials loglabor "))

X_lag=st_data(.,(" lagloglab laglogmat logcapital "))

Y=st_data(.,(" logdeflatedrevenue "))

QR_lag=st_data(.,(" logprevioustariff "))

C=st_data(.,("const"))

OMEGA=PHI-X*betas'

OMEGA_lag=PHI_LAG-X_lag*betas'

OMEGA_lag_pol=(C,OMEGA_lag,QR_lag)

g_b = invsym(OMEGA_lag_pol'OMEGA_lag_pol)*OMEGA_lag_pol'OMEGA

XI=OMEGA-OMEGA_lag_pol*g_b

crit=(Z'XI)'(Z'XI)

}

: void DL()

{

S=optimize_init()

optimize_init_evaluator(S, &GMM_DL())

optimize_init_evaluatortype(S,"d0")

optimize_init_technique(S, "nm")

optimize_init_nmsimplexdeltas(S, 0.1)

optimize_init_which(S,"min")

optimize_init_params(S,(2,0.8,0.2))

p=optimize(S)

p

st_matrix("beta_NP",p)

}

I save the matrix and get the betas.

Thanks!

I am trying to run the following model:

xi: xtivreg sba ids fco vvm nan avi i.year (tnom vda = sud m_vvd m_mmd), be first

I would like to test whether tnom and vda are really endogenous or exogenous.

I know I could collapse the data to a cross-section and use estat endogenous, but my supervisor would like me to keep controlling for the years.

I would appreciate it if someone could let me know the command.
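In case it helps to make the idea concrete: one textbook alternative that keeps the year dummies throughout is the regression-based Durbin-Wu-Hausman (control-function) test. The sketch below reuses the variable names from the command above; resid1 and resid2 are hypothetical names, and since your model uses the between estimator, this pooled-OLS version is only an approximation:

```stata
* first stages: each suspect regressor on all exogenous variables and instruments
xi: reg tnom ids fco vvm nan avi sud m_vvd m_mmd i.year
predict resid1, residuals
xi: reg vda ids fco vvm nan avi sud m_vvd m_mmd i.year
predict resid2, residuals
* structural equation augmented with the first-stage residuals
xi: reg sba tnom vda ids fco vvm nan avi i.year resid1 resid2
* joint significance of the residuals suggests endogeneity of tnom and vda
test resid1 resid2
```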

Thank you very much

Mona

I want to estimate productivity and the betas.

I am running the GMM in Mata:

My program file is as follows:

Step 1:

Simple OLS to get the initial beta values.

reg logdeflatedrevenue logcapital logmaterials loglabor

gen bols_l=_b[ loglabor ]

gen bols_k=_b[ logcapital ]

gen bols_m=_b[ logmaterials ]

reg logdeflatedrevenue logcapital logmaterials loglabor laglogcap lagloglab laglogmat

predict phi

gen phi_lag=L.phi   // requires the data to be tsset/xtset beforehand

Step 2:

. mata

------------------------------------------------- mata (type end to exit) -----------------------------------------------

: void GMM_DL(todo,betas,crit,g,H)

> {

> PHI=st_data(.,("phi"))

> PHI_LAG=st_data(.,("phi_lag"))

> Z=st_data(.,(" laglogcap lagloglab laglogmat "))

> X=st_data(.,(" logcapital logmaterials loglabor "))

> X_lag=st_data(.,(" laglogcap lagloglab laglogmat" ))

> Y=st_data(.,(" logdeflatedrevenue "))

> QR_lag=st_data(.,(" logprevioustariff "))

> C=st_data(.,("const"))

> OMEGA=PHI-X*betas'

> OMEGA_lag=PHI_LAG-X_lag*betas'

> OMEGA_lag_pol=(C,OMEGA_lag,QR_lag)

> g_b = invsym(OMEGA_lag_pol'OMEGA_lag_pol)*OMEGA_lag_pol'OMEGA

> XI=OMEGA-OMEGA_lag_pol*g_b

> crit=(Z'XI)'(Z'XI)

> }

: void DL()

> {

> S=optimize_init()

> optimize_init_evaluator(S, &GMM_DL())

> optimize_init_evaluatortype(S,"d0")

> optimize_init_technique(S, "nm")

> optimize_init_nmsimplexdeltas(S, 0.1)

> optimize_init_which(S,"min")

> optimize_init_params(S, st_data(1, "bols_k bols_m bols_l"))  // order must match the columns of X

> p=optimize(S)

> p

> st_matrix("beta_DL",p)

> }

: end

-------------------------------------------------------------------------------------------------------------------------

cap program drop dl

program dl, rclass

preserve

sort N time

mata DL()

end

Question: what is my next step? How can I recover OMEGA (productivity) and my betas? Can anyone share a link or some steps? Will this estimate productivity in the spirit of LP or OP? What are the next commands?
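For what it's worth, once DL() has run, the saved matrix beta_DL can be pulled back into Stata to back out productivity. A minimal sketch, assuming the optimization converged and that the parameter order (b_k, b_m, b_l) matches the columns of X above; omega_dl is a hypothetical name:

```stata
mata DL()                        // leaves the estimates in matrix beta_DL
matrix list beta_DL
* omega = phi - b_k*k - b_m*m - b_l*l, using the column order of X
gen omega_dl = phi - beta_DL[1,1]*logcapital ///
                   - beta_DL[1,2]*logmaterials ///
                   - beta_DL[1,3]*loglabor
```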

Thanks!

Code:

drop if STORY > 10


I want to compute the average of a variable (teamsize) for observations within a specific time period. I have a date variable, formatted as %td, that I would like to use. Specifically, I have data on companies and their teams at different points in time. I want to calculate the average team size for a specific company within a given time period, always between the current date and the 365 days prior to the observation.

. bysort company: egen avg_teamsize = mean(teamsize) if inrange()

is my best guess (sorry, new to stata and related programs in general!). I do not know how to specify inrange so that it takes the average of all observations with a date (variable is DATE, formatted as %td) in the range of the observation date and the 365 previous days. For example, if the teamsize was 55 on June 1st 2011, I want to create variable with a mean that takes into account all teamsizes from June 1st 2010 to June 1st 2011, including the team size of June 1st 2011.
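One way to get a rolling window like this is the community-contributed rangestat command (available from SSC). The sketch below assumes the variable names you describe (DATE, company, teamsize); mean_teamsize is a hypothetical name for the result. interval(DATE -365 0) includes both the observation date and the 365 days before it:

```stata
ssc install rangestat    // community-contributed, from SSC
rangestat (mean) mean_teamsize = teamsize, interval(DATE -365 0) by(company)
```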

It would be awesome if someone could help me out!

Best,

Julian

I'm generating a spatial weighting matrix in Mata and would like to create an Sp spatial matrix. I read that "spmatrix spfrommata Wnew = W v" could copy a matrix from Mata to an Sp spatial weighting matrix.

But then an error shows: command spmatrix is unrecognized.

Then I tried "spmat spfrommata Wnew = W v", and an error shows again: spfrommata unknown subcommand.

I tried to install this command but could not find the installation package. Does anyone know anything about this command? I'm using Stata 14. Thank you very much!

However, as a dedicated Mata user/programmer for many years I'm also wondering what might be some of the main advantages of Python over Mata.

If you are experienced in both languages, would you be willing to suggest what might be some such advantages? Or perhaps point me to any relevant links that might provide such information?

Thanks in advance.

Basically, it is a Mata class that I extend whenever new functionality becomes available in later versions.

I'm usually working in version 15.

I'm quite lazy, so I wrote some code that opens Stata version 12 and generates and saves the Mata code in an .mlib. Likewise for the extensions in versions 13, 14 and 15.

The problem is how to reset my running version 15 so that it uses the newly saved .mlib from Stata 12 etc. instead of the old ones.

The only solution I've found so far is closing and starting Stata.
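A possible alternative to restarting, assuming the rebuilt .mlib files overwrite the old ones along the ado-path, is to clear and re-index Mata libraries from within the running session (I have not verified this covers every caching corner case, but it is worth trying before a restart):

```stata
mata: mata clear         // drop compiled functions already loaded in memory
mata: mata mlib index    // rebuild the index of .mlib libraries on the ado-path
```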

Looking forward to hearing from you.

I would like to investigate the effects of a natural disaster on individuals, so I have been using a difference-in-differences method. My main regression so far is given below:

Code:

xtreg outcome_var FY2007 FY2008 FY2009 FY2011 FY2012 FY2013 didFY2007 didFY2008 didFY2009 didFY2011 didFY2012 didFY2013 if inrange(wave,2008,2014), fe

with the fixed-effects option

Code:

,fe

included. The financial-year dummies are

Code:

FY2007 FY2008 FY2009 FY2011 FY2012 FY2013

and the interaction terms are

Code:

didFY2007 didFY2008 didFY2009 didFY2011 didFY2012 didFY2013

defined as, for example,

Code:

didFY2007=treatment*financialyear2007

Now, as an alternative, I want to use propensity score matching via

Code:

teffects psmatch

1. I have read the manual for the function but I am not sure what it does. I have a treatment group (intreatmentgroup==1) and a control group (intreatmentgroup==0). My attempt so far, matching on the covariates cleanedhigh1 cleanesempst hgsex hhtype cleanhstenr, is:

Code:

teffects psmatch (outcome_var) (intreatmentgroup cleanedhigh1 cleanesempst hgsex hhtype cleanhstenr, probit)

3. The shock was observed in 2010. Therefore, in my difference-in-differences estimation I can look at the effects before and after the shock through

Code:

did=treatment*post

where post indicates observations after the shock.

4. Is it possible to control for fixed effects using

Code:

teffects psmatch
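On point 4: as far as I know, teffects psmatch has no fixed-effects machinery of its own, but one workaround sometimes used is to put year indicators into the treatment model so that matching accounts for the year. A sketch reusing the command above, assuming wave is the year variable from the xtreg call:

```stata
teffects psmatch (outcome_var) ///
    (intreatmentgroup cleanedhigh1 cleanesempst hgsex hhtype cleanhstenr i.wave, probit)
```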

I know those are many questions, but I have now been reading about this for 2-3 weeks and I still could not solve the issue. Any help, or an answer to any of the questions, would be highly appreciated.

Kind regards.


I was wondering if there is someone who can help me with my bar graph.

With the following code I created the graph shown below:

graph bar (sum) PSPP CSPP CBPP3 ABSPP, ///
    over(Date_APP_M, label(angle(90) labsize(small)) ///
        relabel(1"Mar" 2" " 3" " 4"Jun" 5" " 6" " 7"Sep" 8" " 9" " 10"Dec" 11" " 12" " ///
                13"Mar" 14" " 15" " 16"Jun" 17" " 18" " 19"Sep" 20" " 21" " 22"Dec" 23" " 24" " ///
                25"Mar" 26" " 27" " 28"Jun" 29" " 30" " 31"Sep" 32" " 33" " 34"Dec" 35" " 36" " ///
                37"Mar" 38" " 39" " 40"Jun" 41" " 42" " 43"Sep" 44" " 45" " 46"Dec" 47" " 48" ")) ///
    stack over(Date_APP_Y) ///
    ylabel(, angle(0) labsize(small)) ///
    ytitle("EUR (billions)") ///
    note("Source: History of APP redemptions, ECB website", size(vsmall)) ///
    legend(label(1 "PSPP") label(2 "CSPP") label(3 "CBPP3") label(4 "ABSPP"))

Now I am stuck with two questions:

(1) Can I add a line that shows the average monthly purchases of these four programs? I know I can add a straight line, but the line I want to add changes its value over time (from 60 to 80, back to 60, to 30, to 15). As far as I know (and I just started using Stata, so I might be wrong), I can only combine bars and lines in one graph with twoway, but as I am using groups, I cannot use a twoway graph?
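On (1): that is basically right, graph bar cannot overlay a line. One possible workaround (just a sketch, with hypothetical variable names: date_m for a numeric monthly date and avg_purchase for the time-varying average) is to build the stacked bars by hand in twoway, which does allow adding a line, using cumulative sums so the bars appear stacked:

```stata
* cumulative sums: each bar is drawn behind the next, so segments stack visually
gen s1 = PSPP
gen s2 = s1 + CSPP
gen s3 = s2 + CBPP3
gen s4 = s3 + ABSPP
twoway (bar s4 date_m) (bar s3 date_m) (bar s2 date_m) (bar s1 date_m) ///
       (line avg_purchase date_m), ///
    ytitle("EUR (billions)") ///
    legend(order(4 "PSPP" 3 "CSPP" 2 "CBPP3" 1 "ABSPP" 5 "Average"))
```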

(2) How can I reposition the labels of the x-axis? As it is now, the months under the bars don't correspond with the values of the bars: the first bar shown is supposed to be March, but currently it is not.

Thank you for taking the time to read this.

Best,

Lisanne


"error in remcor

could not calculate numerical derivatives

missing values encountered

(error occurred in ML computation)

(use trace option and check correctness of initial model)

(probabilities will be stored in p_21 p_22 p_23 p_24 p_25 p_26 p_27 p_28)

cannot compute posterior probabilities for more than 1 continuous latent variable

r(301)"

Here is the code

forvalues i=2/10 {
    * starting values
    eq idc1: a
    eq idc2: b
    * the model
    gllamm resp a b aHGE bsp blp bcp bgmr blog_Gov bequity agdp aupop afert_rate adependratio amav, ///
        i(id) nocons eqs(idc1 idc2) nrf(2) link(ident ident) family(gauss gauss) from(a) ///
        lv(var) fv(var) nip(`i') ip(f) skip
    * store the estimates and log likelihood
    matrix a_`i' = e(b)
    matrix a_ = e(b)
    matrix loglik_`i' = e(ll)
    matrix np_`i' = colsof(a_`i')
    * posterior probabilities
    gllapred p_`i', p
    * predicted values
    gllapred bivpre_`i', linpred
    gllasim y_`i'
    gllapred locaz`i', u
    matrix locs`i' = e(zlc2)'
    matrix lp`i' = e(zps2)'
}


I need to do this for around 100 persons in a sample, over a quarterly time span of 6 years.

From the error terms I should then estimate the standard deviation.

Then I need to categorize the standard deviations of the error terms into four portfolios, from low to high.

How could I estimate the errors for all persons at every point in time, all at once?
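As a starting point, here is a sketch of one way to do this in Stata after importing the Excel sheet. The regression specification is only a placeholder (Variable1 on Variable2, per the column names below), and mydata.xlsx, resid, sd_resid, and portfolio are hypothetical names:

```stata
import excel using mydata.xlsx, firstrow clear   // hypothetical file name
* run the regression person by person, collecting the residuals
gen resid = .
levelsof Name, local(persons)
foreach p of local persons {
    quietly regress Variable1 Variable2 if Name == "`p'"
    quietly predict r if e(sample), residuals
    quietly replace resid = r if Name == "`p'"
    drop r
}
* standard deviation of the residuals per person, then four portfolios
bysort Name: egen sd_resid = sd(resid)
xtile portfolio = sd_resid, nquantiles(4)
```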

The data that I use is in Excel and looks like this:

Dates    Name   Variable1   Variable2   ... etc
Q12013   X      100
Q22013   X      103
Q32013   X      122
Q42013   X      98
Q12014   X      128
...
Q12013   Y      110
Q22013   Y      105
Q32013   Y      132
Q42013   Y      39
Q12014   Y      138
...
Q12013   Z      87
Q22013   Z      115
Q32013   Z      139
Q42013   Z      87
Q12014   Z      132