<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
	<channel>
		<title>Statalist</title>
		<link>https://www.statalist.org/forums/</link>
		<description>vBulletin Forums</description>
		<language>en</language>
		<lastBuildDate>Tue, 07 Apr 2026 03:53:56 GMT</lastBuildDate>
		<generator>vBulletin</generator>
		<ttl>60</ttl>
		<image>
			<url>images/misc/rss.png</url>
			<title>Statalist</title>
			<link>https://www.statalist.org/forums/</link>
		</image>
		<item>
			<title>Pseudo-panel data and weights</title>
			<link>https://www.statalist.org/forums/forum/general-stata-discussion/general/1785612-pseudo-panel-data-and-weights</link>
			<pubDate>Mon, 06 Apr 2026 04:46:27 GMT</pubDate>
			<description>Hi all, this is my labor force survey 2020, and the structure of LFS 2021 is similar. As the survey subjects are different in the two surveys, I want...</description>
<content:encoded><![CDATA[Hi all, this is my labor force survey (LFS) 2020, and the structure of LFS 2021 is similar. Because the survey subjects differ across the two surveys, I want to create a pseudo-panel using the Deaton (1985) method (<a href="https://www.princeton.edu/~deaton/downloads/Panel_Data_from_Time_Series_of_Cross_Sections.pdf" target="_blank">https://www.princeton.edu/~deaton/do...s_Sections.pdf</a>). I based my code on that paper, but I'm not sure it's accurate. Second, how should the individual weights variable be handled when converting repeated cross-sectional data to pseudo-panel data? Could someone help me, please? Many thanks<br />
<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">clear
input double(monthsv province district commune householdID memberID urban weight sex year_birth age wage survey_year)
4 1 1 1 1 3 1 134.71753154175403 1 1984 35  6000 2020
1 1 1 1 1 3 1 134.71753154175403 1 1984 35  8000 2020
1 1 1 1 1 4 1 134.71753154175403 1 1987 32  9000 2020
4 1 1 1 1 4 1 134.71753154175403 1 1987 32  7000 2020
4 1 1 1 1 5 1 130.90478469928155 2 1989 30  4850 2020
1 1 1 1 1 5 1 130.90478469928155 2 1989 30  6850 2020
1 1 1 1 5 1 1 134.71753154175403 1 1960 59 30000 2020
4 1 1 1 5 1 1 134.71753154175403 1 1960 59 20000 2020
4 1 1 1 5 2 1 223.32760790596396 2 1961 58  15000 2020
1 1 1 1 5 2 1 223.32760790596396 2 1961 58  133000 2020
1 1 1 1 5 3 1 134.71753154175403 1 1989 30 13850 2020
4 1 1 1 5 3 1 134.71753154175403 1 1989 30 13850 2020
4 1 1 1 5 4 1 134.71753154175403 1 1987 33 14850 2020
1 1 1 1 5 4 1 134.71753154175403 1 1987 32 14850 2020
1 1 1 1 5 5 1 130.90478469928155 2 1990 29  8850 2020
end</pre>
</div>Here's my code to create the pseudo-panel (but it does not yet do anything about the weights):<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">gen birth_year = survey_year - age
gen birth_cohort = floor(birth_year/5)*5
collapse (count) n_obs=age, by(sex birth_cohort survey_year)
egen cohort_id = group(sex birth_cohort)
xtset cohort_id survey_year</pre>
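One possible way to carry the weights over (a sketch only -- it assumes probability weights, that weighted cohort-cell means are the target, and starts again from the raw individual-level data; variable names follow the example above):

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">* sketch: weighted cohort means; cohort weight = sum of individual weights
gen birth_cohort = floor((survey_year - age)/5)*5
collapse (mean) wage (rawsum) cohort_weight=weight (count) n_obs=wage [pw=weight], by(sex birth_cohort survey_year)
egen cohort_id = group(sex birth_cohort)
xtset cohort_id survey_year</pre>
</div>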
</div>]]></content:encoded>
			<category domain="https://www.statalist.org/forums/forum/general-stata-discussion/general">General</category>
			<dc:creator>LEE CHAN</dc:creator>
			<guid isPermaLink="true">https://www.statalist.org/forums/forum/general-stata-discussion/general/1785612-pseudo-panel-data-and-weights</guid>
		</item>
		<item>
			<title>Access to FRED: Federal Reserve Economic Database</title>
			<link>https://www.statalist.org/forums/forum/general-stata-discussion/general/1785609-access-to-fred-federal-reserve-economic-database</link>
			<pubDate>Sun, 05 Apr 2026 19:13:28 GMT</pubDate>
			<description>My contact at the St Louis Fed verifies that programmatic FRED access now requires an authorized (free) API key. You can use a browser to describe a...</description>
<content:encoded><![CDATA[My contact at the St Louis Fed verifies that programmatic FRED access now requires an authorized (free) API key. You can use a browser to describe a single series, but programmatic access to that page is not feasible using Stata, Python, or a combination thereof without an API key. I have worked with Claude today to verify that conclusion. Thus, the original <b>freduse</b> routine by David Drukker, which was updated by John Coglianese and Jacob Williams in 2024 to deal with a new HTML format, is no longer workable. The <b>fredusex</b> routine that I wrote when freduse was broken is also unworkable, and neither routine can be readily modified to use an API key. Thus, you should rely on Stata's official <b>import fred</b> command to access FRED data. Here is Claude's useful summary of the development of community-contributed FRED access.<br />
<br />
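For those switching over, the official command needs the key to be registered once; after that, series can be fetched directly (GDPC1 below is purely an illustrative series ID):<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">* register your own (free) API key once
set fredkey yourkeyhere, permanently
* fetch a series into memory (GDPC1 is just an illustration)
import fred GDPC1, clear</pre>
</div>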
The search confirms the key date: FRED launched API Version 2 in November 2025 <a href="https://news.research.stlouisfed.org/2025/11/fred-launches-new-version-of-api/" target="_blank">Stlouisfed</a>, which is almost certainly when the strict API key requirement for programmatic access was imposed on the older /data/ endpoint as well. Before that, the flat-file endpoint at fred.stlouisfed.org/data/ was freely accessible without authentication, which is what freduse and fredusex relied on.<br />
<br />
So the complete timeline is:<ul><li><b>2006</b>: Drukker writes freduse, scraping FRED's unauthenticated flat-file endpoint</li>
<li><b>2012</b>: FRED API v1 launches with optional key, but unauthenticated scraping still works so no one migrates</li>
<li><b>2024</b>: FRED changes their HTML page format, breaking freduse; Baum writes fredusex to adapt, still using the unauthenticated /data/ endpoint</li>
<li><b>Late 2024</b>: Others rewrite freduse to restore functionality, also still unauthenticated</li>
<li><b>November 2025</b>: FRED launches API v2 and simultaneously enforces API key requirements on the old endpoint, breaking all scraping-based approaches</li>
<li><b>2026</b>: Only import fred (StataCorp) remains viable as a pure Stata solution</li>
</ul>]]></content:encoded>
			<category domain="https://www.statalist.org/forums/forum/general-stata-discussion/general">General</category>
			<dc:creator>KitBaum</dc:creator>
			<guid isPermaLink="true">https://www.statalist.org/forums/forum/general-stata-discussion/general/1785609-access-to-fred-federal-reserve-economic-database</guid>
		</item>
		<item>
			<title>How to create a 2×2 multi-panel graph combining horizontal bar charts and coefficient plots</title>
			<link>https://www.statalist.org/forums/forum/general-stata-discussion/general/1785605-how-to-create-a-2×2-multi-panel-graph-combining-horizontal-bar-charts-and-coefficient-plots</link>
			<pubDate>Sun, 05 Apr 2026 18:19:53 GMT</pubDate>
			<description>Hi everyone, 
 
I would like to create in Stata a figure similar to the one shown at the end of this post, using the example dataset provided below...</description>
			<content:encoded><![CDATA[Hi everyone,<br />
<br />
I would like to create in Stata a figure similar to the one shown at the end of this post, using the example dataset provided below (generated with dataex).<br />
<br />
The figure consists of four panels arranged in a 2×2 layout:<br />
<br />
• The top panels show horizontal bar charts with the share of workers in three categories:<br />
(i) wage increases, (ii) same wage, and (iii) wage declines<br />
Each category has two bars, corresponding to two groups: BF and Non-BF.<br />
<br />
• The bottom panels show coefficient plots with point estimates and 95% confidence intervals.<br />
These coefficients represent the difference (in percentage points) between BF and Non-BF for each category.<br />
<br />
• The left column corresponds to job stayers, while the right column corresponds to job movers.<br />
<br />
• The y-axis categories are shared across panels and ordered from top to bottom as:<br />
wage increases, same wage, and wage declines.<br />
<br />
Ideally, I would like:<br />
<br />
1. The bars in the top panels to be horizontal and grouped by category.<br />
2. The coefficient plots in the bottom panels to also be horizontal (with x-axis in percentage points).<br />
3. The y-axis categories to align perfectly across the top and bottom panels.<br />
4. A vertical reference line at zero in the bottom panels.<br />
5. A clean layout where the two columns correspond to &quot;Job Stayers&quot; and &quot;Job Movers&quot;.<br />
<br />
Could you please advise on how to implement this type of multi-panel figure in Stata?<br />
<br />
Many thanks in advance!<br />
<br />
Best,<br />
Otavio<br />
<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">* Example generated by -dataex-. For more info, type help dataex
clear
input str8 panel str6 group str20 category float(share coef ci_low ci_high)

&quot;Stayers&quot; &quot;NonBF&quot; &quot;wage increases&quot; 0.50 0.05 0.02 0.07
&quot;Stayers&quot; &quot;BF&quot; &quot;wage increases&quot; 0.55 0.05 0.02 0.07

&quot;Stayers&quot; &quot;NonBF&quot; &quot;same wage&quot; 0.30 -0.02 -0.04 0.00
&quot;Stayers&quot; &quot;BF&quot; &quot;same wage&quot; 0.25 -0.02 -0.04 0.00

&quot;Stayers&quot; &quot;NonBF&quot; &quot;wage declines&quot; 0.20 0.03 0.01 0.05
&quot;Stayers&quot; &quot;BF&quot; &quot;wage declines&quot; 0.20 0.03 0.01 0.05

&quot;Movers&quot; &quot;NonBF&quot; &quot;wage increases&quot; 0.40 0.04 0.02 0.06
&quot;Movers&quot; &quot;BF&quot; &quot;wage increases&quot; 0.45 0.04 0.02 0.06

&quot;Movers&quot; &quot;NonBF&quot; &quot;same wage&quot; 0.25 -0.01 -0.03 0.01
&quot;Movers&quot; &quot;BF&quot; &quot;same wage&quot; 0.20 -0.01 -0.03 0.01

&quot;Movers&quot; &quot;NonBF&quot; &quot;wage declines&quot; 0.35 0.02 0.00 0.04
&quot;Movers&quot; &quot;BF&quot; &quot;wage declines&quot; 0.35 0.02 0.00 0.04

end</pre>
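One possible approach (a sketch only -- it assumes building the four panels separately with twoway and assembling them with graph combine; positions, offsets, and graph names are illustrative):

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">* common numeric y-axis so categories align across all four panels (3 = top)
gen ycat = cond(category==&quot;wage increases&quot;, 3, cond(category==&quot;same wage&quot;, 2, 1))
gen yoff = cond(group==&quot;BF&quot;, ycat + 0.18, ycat - 0.18)   // two bars per category
local ylab ylabel(1 &quot;wage declines&quot; 2 &quot;same wage&quot; 3 &quot;wage increases&quot;, angle(0))
foreach p in Stayers Movers {
    twoway (bar share yoff if panel==&quot;`p'&quot; &amp; group==&quot;BF&quot;, horizontal barwidth(0.32)) ///
           (bar share yoff if panel==&quot;`p'&quot; &amp; group==&quot;NonBF&quot;, horizontal barwidth(0.32)), ///
        `ylab' legend(order(1 &quot;BF&quot; 2 &quot;Non-BF&quot;)) title(&quot;`p'&quot;) name(top`p', replace)
    twoway (rcap ci_low ci_high ycat if panel==&quot;`p'&quot;, horizontal) ///
           (scatter ycat coef if panel==&quot;`p'&quot;), ///
        xline(0) `ylab' legend(off) xtitle(&quot;p.p.&quot;) name(bot`p', replace)
}
graph combine topStayers topMovers botStayers botMovers, cols(2)</pre>
</div>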
</div><a href="filedata/fetch?filedataid=1785608">Array </a>]]></content:encoded>
			<category domain="https://www.statalist.org/forums/forum/general-stata-discussion/general">General</category>
			<dc:creator>Otavio Conceicao</dc:creator>
			<guid isPermaLink="true">https://www.statalist.org/forums/forum/general-stata-discussion/general/1785605-how-to-create-a-2×2-multi-panel-graph-combining-horizontal-bar-charts-and-coefficient-plots</guid>
		</item>
		<item>
			<title>New SSC package: wdi_deflate: Convert monetary values across PPP, USD, and LCU</title>
			<link>https://www.statalist.org/forums/forum/general-stata-discussion/general/1785604-new-ssc-package-wdi_deflate-convert-monetary-values-across-ppp-usd-and-lcu</link>
			<pubDate>Sun, 05 Apr 2026 18:01:17 GMT</pubDate>
			<description>Introducing wdi_deflate, a Stata package for converting monetary variables — such as consumption, income, or expenditure from household surveys —...</description>
			<content:encoded><![CDATA[<br />
Introducing <b>wdi_deflate</b>, a Stata package for converting monetary variables — such as consumption, income, or expenditure from household surveys — to purchasing power parity (PPP) international dollars, nominal US dollars, or constant local currency units (LCUs) using World Bank World Development Indicator (WDI) data.<br />
<br />
Thanks to Kit Baum, wdi_deflate is available from SSC for Stata version 15 and higher. To install, type: <i>ssc install wdi_deflate</i>.<br />
<br />
The module downloads PPP conversion factors, Consumer Price Index (CPI), and official exchange rates from the WDI and applies them in a single command. Seven conversion paths are supported, including LCU→PPP, LCU→USD, CPI deflation, and rebasing across ICP rounds. It handles both cross-sectional and panel data.<br />
<br />
Here's a quick illustration. Suppose you have household consumption and income data in local currency for Ethiopia and Tanzania, collected in different years, and you want to convert to 2021 PPP international dollars:<br />
<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">clear
input str3 iso3 int year double consumption double income
&quot;ETH&quot; 2018 15000 22000
&quot;TZA&quot; 2019 850000 1200000
end
wdi_deflate consumption income, country(iso3) from(year) to(2021)
list</pre>
</div>This creates consumption_ppp2021 and income_ppp2021. Private consumption PPP is the default; add usd for nominal US dollars, deflate for constant LCU, or gdp for GDP-based PPP. WDI data are downloaded automatically.<br />
<br />
For reproducible research, you can save a deflator snapshot and reference it in subsequent conversions.<br />
<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">clear
wdi_deflate build, saving(deflator.dta) countries(ETH TZA)  
clear
input str3 iso3 int year double consumption
&quot;ETH&quot; 2018 15000
&quot;TZA&quot; 2019 850000
end
wdi_deflate consumption, country(iso3) from(year) to(2021) using(deflator.dta)
list</pre>
</div>wdi_deflate requires wbopendata (ssc install wbopendata) and tknz (ssc install tknz).<br />
<br />
For more details, type help wdi_deflate after installation. Feedback and suggestions are welcome.<br />
 ]]></content:encoded>
			<category domain="https://www.statalist.org/forums/forum/general-stata-discussion/general">General</category>
			<dc:creator>Kalle Hirvonen</dc:creator>
			<guid isPermaLink="true">https://www.statalist.org/forums/forum/general-stata-discussion/general/1785604-new-ssc-package-wdi_deflate-convert-monetary-values-across-ppp-usd-and-lcu</guid>
		</item>
		<item>
			<title>Use of PERSONAL vs PLUS</title>
			<link>https://www.statalist.org/forums/forum/general-stata-discussion/general/1785603-use-of-personal-vs-plus</link>
			<pubDate>Sun, 05 Apr 2026 15:01:53 GMT</pubDate>
			<description>I have received materials to be posted on SSC recently from a couple of authors who also include a README.md file that they have maintained on...</description>
<content:encoded><![CDATA[I have received materials to be posted on SSC recently from a couple of authors who also include a README.md file that they have maintained on GitHub. Those files contain advice such as &quot;Or manually: download all `.ado` and `.sthlp` files and place them in your personal ado directory (`adopath`).&quot;<br />
<br />
As I have told them, this is a <b>Really Bad Idea</b>. There is nothing wrong with manually downloading materials and putting them in the right place, but this is not the right place. Why?<br />
<br />
Because if they ever install the routine from SSC (or a version of that package from the Stata Journal archives), the files will be placed in the appropriate subdirectory of PLUS.  The ado update command will check to see if there is an up-to-date version in PLUS.<br />
<br />
Let's say the user has manually downloaded a version into PERSONAL. Some time later, the author fixes a bug and sends it to me, I update the SSC package, and the user, having read that a bug has been fixed, downloads from SSC into PLUS. But on the adopath, PERSONAL comes before PLUS, so they will never execute the bug-fixed version: it is occluded by the original version in PERSONAL. This is all perfectly logical behavior on Stata's part, but it can be a real hassle for users who are not aware of these distinctions.<br />
<br />
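You can check the search order, and which copy of a program would actually run, from within Stata (<i>mycommand</i> below is a placeholder name):<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">* list the ado-file search path; note PERSONAL is searched before PLUS
adopath
* show which copy of a program Stata will run (mycommand is a placeholder)
which mycommand</pre>
</div>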
I recommend that they download the package using the ssc install command, which will then sync with ado update if they use that command (as they should). But if they download manually, they should use PLUS and create an appropriate subdirectory if necessary.]]></content:encoded>
			<category domain="https://www.statalist.org/forums/forum/general-stata-discussion/general">General</category>
			<dc:creator>KitBaum</dc:creator>
			<guid isPermaLink="true">https://www.statalist.org/forums/forum/general-stata-discussion/general/1785603-use-of-personal-vs-plus</guid>
		</item>
		<item>
			<title>How to create a two-facet figure combining a bar chart and a coefficient plot with different y-scales</title>
			<link>https://www.statalist.org/forums/forum/general-stata-discussion/general/1785601-how-to-create-a-two-facet-figure-combining-a-bar-chart-and-a-coefficient-plot-with-different-y-scales</link>
			<pubDate>Sun, 05 Apr 2026 14:56:34 GMT</pubDate>
			<description>Hi everyone, 
 
I would like to create in Stata a figure similar to the one shown at the end of this post, using the example dataset provided (shown...</description>
			<content:encoded><![CDATA[Hi everyone,<br />
<br />
I would like to create in Stata a figure similar to the one shown at the end of this post, using the example dataset provided (shown with dataex).<br />
<br />
The figure has <b>two vertically stacked panels (facets)</b>, combining a bar chart in the upper panel and a coefficient plot in the lower panel. Ideally, the y-scale in the upper panel is expressed in percentages (%) while the y-scale in the lower panel is expressed in percentage points (p.p.). I would also like the x-axis ticks of the lower panel to be aligned with those of the upper panel so that each coefficient estimate appears approximately in between the corresponding two bars of the upper panel (as shown in the mock figure).<br />
<br />
The mock figure is about labor market transitions (E=employed, U=unemployed, OLF=out of labor force) between one year and the next for groups 1 and 2 (that is why there are two bars for each transition in the upper facet).<br />
<br />
Could you please advise on how to create a figure like this in Stata?<br />
<br />
Many thanks in advance!<br />
<br />
Best,<br />
Otavio<br />
<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">* Example generated by -dataex-. For more info, type help dataex
clear
input str6 transition float(group1 group2 coef ci_low ci_high)
&quot;E_OLF&quot; .18 .16 .030 .020 .045
&quot;E_U&quot; .20 .19 .015 .005 .025
&quot;E_E&quot; .23 .22 .040 .025 .055
end</pre>
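One possible route (a sketch only -- it assumes two panels built with twoway and stacked with graph combine are acceptable; the x positions are illustrative):

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">* put each transition at x = 3, 6, 9 and offset the two bars around it
gen x  = _n*3
gen x1 = x - 0.5
gen x2 = x + 0.5
twoway (bar group1 x1, barwidth(0.9)) (bar group2 x2, barwidth(0.9)), ///
    xlabel(3 &quot;E_OLF&quot; 6 &quot;E_U&quot; 9 &quot;E_E&quot;) xscale(range(1.5 10.5)) ///
    ytitle(&quot;Share (%)&quot;) legend(order(1 &quot;Group 1&quot; 2 &quot;Group 2&quot;)) name(top, replace)
* coefficients sit at x, i.e. between each pair of bars; same x-scale as above
twoway (rcap ci_low ci_high x) (scatter coef x), yline(0) ///
    xlabel(3 &quot;E_OLF&quot; 6 &quot;E_U&quot; 9 &quot;E_E&quot;) xscale(range(1.5 10.5)) ///
    ytitle(&quot;Difference (p.p.)&quot;) legend(off) name(bottom, replace)
graph combine top bottom, cols(1) xcommon</pre>
</div>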
</div> <a href="filedata/fetch?filedataid=1785602">Array </a><br />
]]></content:encoded>
			<category domain="https://www.statalist.org/forums/forum/general-stata-discussion/general">General</category>
			<dc:creator>Otavio Conceicao</dc:creator>
			<guid isPermaLink="true">https://www.statalist.org/forums/forum/general-stata-discussion/general/1785601-how-to-create-a-two-facet-figure-combining-a-bar-chart-and-a-coefficient-plot-with-different-y-scales</guid>
		</item>
		<item>
			<title>The wofd() function returns an inconsistent week id???</title>
			<link>https://www.statalist.org/forums/forum/general-stata-discussion/general/1785593-the-wofd-function-returns-an-inconsistent-week-id</link>
			<pubDate>Sun, 05 Apr 2026 05:06:00 GMT</pubDate>
			<description><![CDATA[Hello everyone, 
As the title suggests, I've noticed that the week IDs aren't matching the date variable correctly. Specifically: 
 
 My goal is to...]]></description>
			<content:encoded><![CDATA[Hello everyone,<br />
As the title suggests, I've noticed that the week IDs aren't matching the date variable correctly. Specifically:<ul><li>My goal is to create a unique variable that identifies a week (from a variable containing transaction date information) accurately and consistently.</li>
<li>From the variable <b>date </b>formatted as <i>%td</i>, I create a variable <b>w</b> containing the week ID (=1 for 1960w1, =2 for 1960w2, ...) and a variable <b>dow</b> containing the IDs of the days of the week (=1 for Monday, =6 for Saturday, and =0 for Sunday).</li>
<li>At the end of the data, the values of <b>w</b> and <b>dow</b> match in the sense that days of the same week will have the same value of <b>w</b> (as in figure 1). But this is no longer true at the beginning of the data (and seemingly in the middle as well) as in figure 2.</li>
</ul>I'm not sure if this is a bug or a change in the convention for which day the week starts on. If I want to achieve consistency like the result in figure 1 for the entire sample, how should I do it?<br />
All comments are appreciated.<br />
Below is the data along with the code and related figures.<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">. import excel &quot;Bitcoin &quot;, sheet(Sheet2) first case(lower) clear
(3 vars, 4,999 obs)

. desc

Contains data
 Observations:         4,999                  
    Variables:             3                  
-----------------------------------------------------------------------------------------------
Variable      Storage   Display    Value
    name         type    format    label      Variable label
-----------------------------------------------------------------------------------------------
date            int     %td..                 date
price           str8    %9s                   price
btc             double  %10.0g                btc
-----------------------------------------------------------------------------------------------
Sorted by: 
     Note: Dataset has changed since last saved.

. gen w = wofd(date)

. gen dow = dow(date)</pre>
</div><a href="filedata/fetch?filedataid=1785594">Array </a><a href="filedata/fetch?filedataid=1785595">Array </a><br />
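For reference, Stata's %tw weeks are not day-of-week-aligned calendar weeks: week 1 of every year runs from 1 to 7 January, so week boundaries drift relative to <b>dow</b> across years. If Monday-start calendar weeks are what is wanted, one common construction (a sketch) anchors each date to the Monday that starts its week:<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">* date of the Monday beginning each calendar week (dow: Mon=1 ... Sun=0)
gen monday = date - mod(dow(date) + 6, 7)
format monday %td
* consecutive week id, consistent with dow for the whole sample
egen wk = group(monday)</pre>
</div>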
 ]]></content:encoded>
			<category domain="https://www.statalist.org/forums/forum/general-stata-discussion/general">General</category>
			<dc:creator>Manh Hoang Ba</dc:creator>
			<guid isPermaLink="true">https://www.statalist.org/forums/forum/general-stata-discussion/general/1785593-the-wofd-function-returns-an-inconsistent-week-id</guid>
		</item>
		<item>
			<title>-stripplot- package updated on SSC: new command -strip-</title>
			<link>https://www.statalist.org/forums/forum/general-stata-discussion/general/1785586-stripplot-package-updated-on-ssc-new-command-strip</link>
			<pubDate>Sat, 04 Apr 2026 21:10:00 GMT</pubDate>
			<description>Thanks as ever to Kit Baum, the stripplot package has been updated. The update consists of a new command strip, which requires Stata 9 or higher.  
...</description>
			<content:encoded><![CDATA[Thanks as ever to Kit Baum, the <span style="font-family:courier new">stripplot </span><span style="font-family:arial">package has been updated. The update consists of a new command</span><span style="font-family:courier new"> strip</span><span style="font-family:arial">, which requires Stata 9 or higher. </span><br />
<br />
<span style="font-family:courier new">stripplot </span><span style="font-family:arial">is still included -- I don't want to break any scripts that might be using it -- but </span><span style="font-family:courier new">stripplot </span><span style="font-family:arial">has reached the end of its road, and I won't enhance it in future (although I will try to fix any further  bugs reported).</span><br />
<br />
<span style="font-family:courier new">strip</span><span style="font-family:arial"> is </span><span style="font-family:courier new">stripplot</span><span style="font-family:arial"> reduced in code complexity.  So why did I do that? </span><br />
<br />
<span style="font-family:courier new">stripplot</span><span style="font-family:arial"> was born as </span><span style="font-family:courier new">onewplot</span><span style="font-family:arial"> in 1999 (the rather odd name had to fit within the</span><span style="font-family:courier new"> filename.ext</span><span style="font-family:arial"> pattern for filenames in MS-DOS -- filenames could be shorter, but not longer). It was renamed as </span><span style="font-family:courier new">onewayplot </span>in 2003 on being rewritten for Stata 8, and then renamed <span style="font-family:courier new">stripplot</span> in 2005. <br />
<br />
<span style="font-family:courier new">stripplot</span> has accumulated options over the years, but I became dissatisfied with it for various reasons. The options now removed in the cut-down version <span style="font-family:courier new">strip </span>concerned extras: adding confidence interval bars, boxplots of various styles, Tufte-style quartile or midgap plots, and reference lines. The loss of functionality is much less than might be guessed, as you can still add details to the graphs through its <span style="font-family:courier new">addplot()</span> option. <br />
<br />
Confidence interval calculations in <span style="font-family:courier new">stripplot </span>were geared to<span style="font-family:courier new"> ci </span>as it existed before Stata 14.1, when the syntax was changed. An update of the same kind was overdue. It may seem odd that the update consisted of removing the options, but what <span style="font-family:courier new">stripplot </span>can do (and more) is, so far as I am concerned, better done by reversing the order of operations.<br />
<br />
<span style="font-family:courier new">cisets </span>from SSC (and the <i>Stata Journal</i> from 26(2) -- the next issue, due around June) produces confidence interval sets for various summary measures, after which confidence intervals can be plotted directly. <a href="https://www.statalist.org/forums/forum/general-stata-discussion/general/1766637-cisets-downloadable-from-ssc-confidence-interval-sets" target="_blank">https://www.statalist.org/forums/for...-interval-sets</a> explains and exemplifies in more detail, as does that paper forthcoming in the <i>Stata Journal. </i><br />
<br />
The other options that have been removed were not outdated, but the main reason for removing them was a sense that the syntax had become too complicated. The help file still includes a large number of references in its territory, largely because of a personal habit of using help files as my notes on projects based on Stata commands, some of which get written up in due course as papers in the <i>Stata Journal</i>. If I think up new options or new examples or encounter new references, they tend to get added to the help file. <br />
<br />
The help for<span style="font-family:courier new"> strip </span>includes the code for 30 graph examples, which can be run directly using the ancillary file <span style="font-family:courier new">strip_examples.do</span>.<br />
<br />
<span style="font-family:arial">You can install</span><span style="font-family:courier new"> strip</span><span style="font-family:arial"> by installing the</span><span style="font-family:courier new"> stripplot </span><span style="font-family:arial">package for the first time or updating your installation if you've installed it before. <br />
<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">ssc install stripplot 

ssc install stripplot, replace</pre>
</div>Here is a sampler of graphs from </span><span style="font-family:courier new">strip</span><span style="font-family:arial">. </span><br />
<br />
<a href="filedata/fetch?filedataid=1785587">Array </a> <a href="filedata/fetch?filedataid=1785588">Array </a> <a href="filedata/fetch?filedataid=1785589">Array </a> <a href="filedata/fetch?filedataid=1785590">Array </a> <a href="filedata/fetch?filedataid=1785591">Array </a>]]></content:encoded>
			<category domain="https://www.statalist.org/forums/forum/general-stata-discussion/general">General</category>
			<dc:creator>Nick Cox</dc:creator>
			<guid isPermaLink="true">https://www.statalist.org/forums/forum/general-stata-discussion/general/1785586-stripplot-package-updated-on-ssc-new-command-strip</guid>
		</item>
		<item>
			<title><![CDATA[format option not working in Stata's rotate command?]]></title>
			<link>https://www.statalist.org/forums/forum/general-stata-discussion/general/1785575-format-option-not-working-in-stata-s-rotate-command</link>
			<pubDate>Sat, 04 Apr 2026 00:47:33 GMT</pubDate>
			<description><![CDATA[Hello! 
 
Does anyone know why the following code doesn't seem to be causing the returned results to be reported to 2 decimal places (it's returning...]]></description>
			<content:encoded><![CDATA[Hello!<br />
<br />
Does anyone know why the following code doesn't seem to be causing the returned results to be reported to 2 decimal places (it's returning the usual 4 decimal places)?<br />
<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">factor iees_01 iees_03 iees_04 iees_05 iees_06 iees_07 iees_08 iees_09 iees_10 iees_11 iees_12 iees_13 iees_15 iees_17 iees_18 iees_20 if val_ind == 0, ipf citerate(1000) factors(4)
rotate, promax(3) oblique blank(0.32) format(%9.2f)</pre>
</div>The documentation for rotate says that format() should be an available option (if I'm reading it correctly).<br />
<br />
I'm running Stata 17.0 if that's important.<br />
<br />
Thank you!]]></content:encoded>
			<category domain="https://www.statalist.org/forums/forum/general-stata-discussion/general">General</category>
			<dc:creator>Evan Sommer</dc:creator>
			<guid isPermaLink="true">https://www.statalist.org/forums/forum/general-stata-discussion/general/1785575-format-option-not-working-in-stata-s-rotate-command</guid>
		</item>
		<item>
			<title>error with reghdfejl. name:  too many specified</title>
			<link>https://www.statalist.org/forums/forum/general-stata-discussion/general/1785569-error-with-reghdfejl-name-too-many-specified</link>
			<pubDate>Fri, 03 Apr 2026 15:47:26 GMT</pubDate>
			<description>Hi all, 
 
I am new using Julia and reghdfejl SSC, Iam comparing results and time with reghdfe, I install Julia and check with naive command: 
 
jl:...</description>
			<content:encoded><![CDATA[Hi all,<br />
<br />
I am new to Julia and to reghdfejl from SSC. I am comparing results and timing with reghdfe. I installed Julia and checked it with a naive command:<br />
<br />
jl: sqrt(2)<br />
1.4142135623730951<br />
<br />
Unfortunately I am getting the error &quot;name:  too many specified&quot; when I run the example reghdfejl ln_wage grade, absorb(idcode year):<br />
<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">
// net install reghdfejl, replace from(https://raw.github.com/droodman/reghdfejl/v0.3.0)

clear all
webuse nlswork
expand(1000)

**************************************
// para MP
set processors 6

timer clear 1
timer on 1  

reghdfe ln_wage grade age ttl_exp tenure not_smsa south , absorb(idcode year)


(MWFE estimator converged in 8 iterations)
note: grade is probably collinear with the fixed effects (all partialled-out values are close to zero; tol = 1.0e-09)

HDFE Linear regression                            Number of obs   = 28,091,000
Absorbing 2 HDFE groups                           F(   5,28086284)=  314795.91
                                                  Prob &gt; F        =     0.0000
                                                  R-squared       =     0.6852
                                                  Adj R-squared   =     0.6851
                                                  Within R-sq.    =     0.0531
                                                  Root MSE        =     0.2681

------------------------------------------------------------------------------
     ln_wage | Coefficient  Std. err.      t    P&gt;|t|     [95% conf. interval]
-------------+----------------------------------------------------------------
       grade |          0  (omitted)
         age |   .0114497    .000288    39.76   0.000     .0108853    .0120142
     ttl_exp |   .0323758   .0000434   745.86   0.000     .0322907    .0324608
      tenure |   .0104689   .0000267   391.72   0.000     .0104165    .0105213
    not_smsa |  -.0914148   .0002781  -328.76   0.000    -.0919598   -.0908698
       south |  -.0640471   .0003189  -200.84   0.000    -.0646721   -.0634221
       _cons |    1.16142   .0083786   138.62   0.000     1.144998    1.177842
------------------------------------------------------------------------------

Absorbed degrees of freedom:
-----------------------------------------------------+
 Absorbed FE | Categories  - Redundant  = Num. Coefs |
-------------+---------------------------------------|
      idcode |      4697           0        4697     |
        year |        15           1          14     |
-----------------------------------------------------+


timer off 1          
timer list 1

**************************************
*JULIA

timer clear 2
timer on 2  

reghdfejl ln_wage grade age ttl_exp tenure not_smsa south , absorb(idcode year) gpu
name:  too many specified

timer off 2          
timer list 2</pre>
</div>I also tried a shorter command and got the same error:<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">reghdfejl ln_wage grade
name: too many specified</pre>
</div><br />
<br />
 ]]></content:encoded>
			<category domain="https://www.statalist.org/forums/forum/general-stata-discussion/general">General</category>
			<dc:creator>Rodrigo Badilla</dc:creator>
			<guid isPermaLink="true">https://www.statalist.org/forums/forum/general-stata-discussion/general/1785569-error-with-reghdfejl-name-too-many-specified</guid>
		</item>
		<item>
			<title>Comprehensive update of midas available on SSC</title>
			<link>https://www.statalist.org/forums/forum/general-stata-discussion/general/1785563-comprehensive-update-of-midas-available-on-ssc</link>
			<pubDate>Fri, 03 Apr 2026 08:34:59 GMT</pubDate>
			<description>Thanks to Kit Baum, an updated midas package is now available through SSC archive. MIDAS (Meta-analytical Integration of Diagnostic Accuracy Studies)...</description>
			<content:encoded><![CDATA[Thanks to Kit Baum, an updated midas package is now available through the SSC archive. MIDAS (Meta-analytical Integration of Diagnostic Accuracy Studies) provides a comprehensive suite of commands for bivariate meta-analysis of diagnostic test accuracy data.<br />
<br />
The package includes:<br />
<br />
• 5 estimation methods: maximum likelihood (meglm), quasi-random simulated likelihood, Bayesian Metropolis-Hastings (bayesmh), Hamiltonian Monte Carlo (CmdStan), and integrated nested Laplace approximation (R-INLA)<br />
• 8 exploratory commands: QUADAS/QUADAS-2 quality assessment, bivariate boxplots, chi-plots, Kendall concordance, assessment diagnostics, binomial sample size estimation, and exploratory forest plots<br />
• 9 post-estimation commands: summary forest plots (4 plot types), SROC curves (regressional and bivariate), Fagan nomograms, likelihood ratio matrices, conditional probability plots, publication bias testing, Bayesian diagnostic plots, and clinical utility analysis (HSRUC)<br />
• 2 heterogeneity investigation commands: stratified subgroup analysis and bivariate meta-regression with comparative SROC plots<br />
• 5 data conversion utilities: simulation, ordinal-to-binary, continuous-to-binary, cluster-to-binary, and IPD-to-aggregate<br />
• GUI dialogs for 29 subcommands<br />
<br />
The package requires Stata 16+ and the community-contributed bayesparallel, xsvmat, and moremata packages. External software (R-INLA, CmdStan) is needed only for the inla and hmc estimators.<br />
<br />
Ben A. Dwamena]]></content:encoded>
			<category domain="https://www.statalist.org/forums/forum/general-stata-discussion/general">General</category>
			<dc:creator>Ben A. Dwamena</dc:creator>
			<guid isPermaLink="true">https://www.statalist.org/forums/forum/general-stata-discussion/general/1785563-comprehensive-update-of-midas-available-on-ssc</guid>
		</item>
		<item>
			<title>New package for mobile push notifications via telegram available on GitHub</title>
			<link>https://www.statalist.org/forums/forum/general-stata-discussion/general/1785560-new-package-for-mobile-push-notifications-via-telegram-available-on-github</link>
			<pubDate>Fri, 03 Apr 2026 04:24:45 GMT</pubDate>
			<description>Hi everyone, 
 
In the spirit of the Easter long weekend, I’m releasing a new package that might help some of you spend less time at your desk and...</description>
			<content:encoded><![CDATA[Hi everyone,<br />
<br />
In the spirit of the Easter long weekend, I’m releasing a new package that might help some of you spend less time at your desk and more time hunting for eggs (or just enjoying a coffee/wine in peace).<br />
<br />
The package is called <b>telegram</b>. It’s a no-nonsense tool to send free push notifications (mobile alerts) and exported Stata figures directly to your smartphone or desktop via the Telegram messenger API.<br />
<br />
<b>Why use this?</b> We’ve all had those long-running Monte Carlo simulations, bootstraps, or server batch jobs that leave us tethered to the office. While Stata can send emails via mail, configuring SMTP servers is often a headache—especially on restricted networks. telegram uses the OS-native curl command to bypass those hurdles entirely.<br />
<br />
<b>Key Technical Features:</b><ul><li><b>Interactive Setup:</b> Running telegram setup once stores your Bot Token and Chat ID in your PERSONAL directory (VDI users: remember to point your sysdir to a persistent drive).</li>
<li><b>Graph Support:</b> Pushes .png or .jpg exports directly to your device so you can review results remotely.</li>
<li><b>Unicode &amp; Chunking:</b> Fully Unicode-aware; it automatically splits messages exceeding 4,000 characters to ensure complex characters/emojis don't break the API boundary.</li>
<li><b>Shortcuts:</b> Includes a tg alias for quick status updates in your do-files.</li>
</ul><b>Installation:</b> The package has been submitted to the SSC. Until it is processed, you can install the stable version directly from GitHub:<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">net install telegram, from(&quot;<a href="https://raw.githubusercontent.com/DObst/stata-telegram/main/" target="_blank">https://raw.githubusercontent.com/DO...telegram/main/</a>&quot;)</pre>
</div><b>Setup:</b> If you have the Telegram App on your phone (iPhone or Android), the package will guide you through setting up the API. It's as simple as running:<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">telegram setup</pre>
</div><b>Quick Example:</b><br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">sysuse auto, clear
scatter price mpg
graph export &quot;results.png&quot;, as(png) replace
tg &quot;Model finished. R-squared is 0.85. || Let's pretend that's causal...&quot;, figure(&quot;results.png&quot;)</pre>
</div>I've tested this on Windows and macOS using <b>Stata 19.5</b>. It should theoretically run on any version from 17 onwards due to the Unicode requirements, but I’d welcome feedback from anyone running earlier versions.<br />
<br />
Best,<br />
<br />
Daniel Obst [<a href="https://github.com/DObst/stata-telegram" target="_blank">https://github.com/DObst/stata-telegram</a>]]]></content:encoded>
			<category domain="https://www.statalist.org/forums/forum/general-stata-discussion/general">General</category>
			<dc:creator>Dan Obst</dc:creator>
			<guid isPermaLink="true">https://www.statalist.org/forums/forum/general-stata-discussion/general/1785560-new-package-for-mobile-push-notifications-via-telegram-available-on-github</guid>
		</item>
		<item>
			<title>model fitted on these data fails to meet the asymptotic assumptions of the Hausman test</title>
			<link>https://www.statalist.org/forums/forum/general-stata-discussion/general/1785553-model-fitted-on-these-data-fails-to-meet-the-asymptotic-assumptions-of-the-hausman-test</link>
			<pubDate>Thu, 02 Apr 2026 22:52:37 GMT</pubDate>
			<description>Hi guys, I am a current economics student and am struggling with Stata and would be very grateful for any guidance.  
 
My study aims to understand...</description>
			<content:encoded><![CDATA[Hi guys, I am a current economics student and am struggling with Stata and would be very grateful for any guidance. <br />
<br />
My study aims to understand how the decline in male regular employment caused by the lost decade is affecting female non-regular labor force participation. <br />
The dependent variable is female non-regular employment, and the independent variable is male regular employment. <br />
<br />
I am using secondary and tertiary sector shares as control variables to make sure the relationship between the decline in male employment and female non-regular employment reflects the Japanese labour market fundamentally changing, rather than, say, deindustrialization. <br />
<br />
I have data from 1982 to 2022 in five-year intervals for each prefecture of Japan, with female and male non-regular and regular employment figures and the sectoral employment breakdown as well. The sum of non-regular and regular employment does not account for all employment, though: I have not included family workers, the self-employed, and so on. <br />
<br />
My question is:<br />
I started off with a pooled OLS with the equation ln(Fnonregular)_it = beta0 + beta1*ln(Mregular)_it + beta2*SecondaryShare_it + beta3*TertiaryShare_it + e_it<br />
<br />
To check whether I should use the RE model or pooled OLS, I ran an LM test, which rejected the null hypothesis. <br />
I also ran the F test, which favoured the FE model. <br />
<br />
Then, to choose between the two, I ran a Hausman test, but I keep getting this error:<br />
chi2(3) = (b-B)'[(V_b-V_B)^(-1)](b-B)<br />
        = -219.03<br />
<br />
Warning: chi2 &lt; 0 ==&gt; model fitted on these data<br />
         fails to meet the asymptotic assumptions<br />
         of the Hausman test; see suest for a<br />
         generalized test.<br />
I tried xtoverid, but that does not work either; it reports an installation issue. <br />
<br />
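For reference, a minimal sketch of the usual FE/RE workflow in Stata (the variable and panel identifiers here are hypothetical placeholders; substitute your own). The sigmamore option of hausman bases both covariance matrices on a common error-variance estimate, which often avoids the negative chi2 problem:<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">* hypothetical variable names; declare the panel structure first
xtset prefecture year

xtreg ln_fnonreg ln_mreg secondary_share tertiary_share, fe
estimates store fe

xtreg ln_fnonreg ln_mreg secondary_share tertiary_share, re
estimates store re

* sigmamore uses the efficient estimator's error variance for
* both covariance matrices, often avoiding a negative chi2
hausman fe re, sigmamore</pre>
</div><br />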
I'm wondering if this issue arises because I don't have enough control variables, or because of data errors. <br />
<br />
I am not that clued up on Stata and would love some guidance. <br />
<br />
I have attached screenshots from what I have managed to run. ]]></content:encoded>
			<category domain="https://www.statalist.org/forums/forum/general-stata-discussion/general">General</category>
			<dc:creator>Tiffany Volken</dc:creator>
			<guid isPermaLink="true">https://www.statalist.org/forums/forum/general-stata-discussion/general/1785553-model-fitted-on-these-data-fails-to-meet-the-asymptotic-assumptions-of-the-hausman-test</guid>
		</item>
		<item>
			<title>Hausman test negative result</title>
			<link>https://www.statalist.org/forums/forum/general-stata-discussion/general/1785545-hausman-test-negative-result</link>
			<pubDate>Thu, 02 Apr 2026 17:16:16 GMT</pubDate>
			<description>Hi everyone,  
 
I am doing a panel data approach that has data for female regular and non regular employment and male regular and non regular...</description>
			<content:encoded><![CDATA[Hi everyone, <br />
<br />
I am taking a panel data approach, with data on female regular and non-regular employment and male regular and non-regular employment for each prefecture over time, in five-year intervals. I am looking at how the lost decade has caused a change in male regular work and what effect that has had on female non-regular employment. <br />
<br />
I started by running a pooled OLS estimator, wrote down the equation, and argued that there are unobserved prefecture-specific characteristics, so I added an individual heterogeneity term to get the individual effects model. Then, to decide whether I should use the REM or the FEM, I ran the Hausman test to see which is better. I got chi2(3) = -219.03; Prob&gt;chi2 = 0.0000, and a message from Stata saying the model fails to meet the asymptotic assumptions of the Hausman test.<br />
<br />
I then ran the F test to see whether the FEM should be used over the pooled OLS and got [F(46, 3717) = 45.16; Prob &gt; F = 0.000]. <br />
<br />
Is this enough to say I should use the FEM, or should I also run the Breusch-Pagan LM test to check for the REM?<br />
<br />
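For what it's worth, a minimal sketch of how those tests are usually run (variable names are placeholders, not your actual data): xttest0, run immediately after the random-effects estimation, gives the Breusch-Pagan LM test of the REM against pooled OLS, and the F test that all u_i = 0 is reported at the foot of the fixed-effects output:<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">* placeholder variable names
xtset prefecture year
xtreg y x1 x2 x3, re
xttest0                 // Breusch-Pagan LM test: REM vs pooled OLS

xtreg y x1 x2 x3, fe    // F test that all u_i = 0 appears at the bottom</pre>
</div><br />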
I also have to think about the lost decade and how this has shaped the labour market. <br />
<br />
I'm just confused, as I can't use the Hausman test, and I don't really know whether this is because of my data or my Stata commands.<br />
<br />
 ]]></content:encoded>
			<category domain="https://www.statalist.org/forums/forum/general-stata-discussion/general">General</category>
			<dc:creator>Tiffany Volken</dc:creator>
			<guid isPermaLink="true">https://www.statalist.org/forums/forum/general-stata-discussion/general/1785545-hausman-test-negative-result</guid>
		</item>
		<item>
			<title>A bug with SPSS imports and 81 byte variable labels</title>
			<link>https://www.statalist.org/forums/forum/general-stata-discussion/general/1785538-a-bug-with-spss-imports-and-81-byte-variable-labels</link>
			<pubDate>Thu, 02 Apr 2026 13:14:55 GMT</pubDate>
			<description><![CDATA[Stata limits variable labels to a maximum of 80 characters. SPSS, on the other hand, limits variable labels to a maximum of 256 bytes. Stata's...]]></description>
			<content:encoded><![CDATA[Stata limits variable labels to a maximum of 80 characters. SPSS, on the other hand, limits variable labels to a maximum of 256 bytes. Stata's documentation states:<br />
<br />
<div class="bbcode_container">
	<div class="bbcode_quote">
		<div class="quote_container">
			<div class="bbcode_quote_container vb-icon vb-icon-quote-large"></div>
			
				If an SPSS variable label is too long, it will be truncated to 80 characters, and the original variable label will be stored as a variable characteristic.
			
		</div>
	</div>
</div>This is wrong on two counts. The first is that after importing an .sav file, variable labels are truncated to 80 <i>bytes</i>, not 80 characters. If your labels are purely ASCII characters, you will not notice the difference. But if your labels are written in a script where each character is multiple bytes, like Arabic, you'll notice quite quickly. Your label will be half as long or shorter than what Stata can actually store, and truncation will frequently occur partway through a character, leaving an invalid Unicode character at the end (appearing as �).<br />
<br />
There's a chance there is some esoteric reason for doing it this way, and that this is not a bug but rather a mistake in the documentation. But what is almost surely a bug is that the original variable label is only stored as a variable characteristic (named spss_variable_label) if it is 82 bytes or longer. If you import a variable with an 81 byte label, the last byte is simply lost and not recoverable in the Stata data, existing neither in the 80 byte label nor in a variable characteristic.<br />
<br />
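For anyone hitting the same truncation, a workaround sketch (the file name is hypothetical): after importing, pull the original label back out of the spss_variable_label characteristic where it exists, and re-truncate it with usubstr, which counts Unicode characters rather than bytes:<br />

<div class="bbcode_container">
	<div class="bbcode_description">Code:</div>
	<pre class="bbcode_code">* hypothetical file name
import spss using "survey.sav", clear

foreach v of varlist _all {
    local orig : char `v'[spss_variable_label]
    if `"`orig'"' != "" {
        * re-truncate at 80 characters, not 80 bytes
        local fix = usubstr(`"`orig'"', 1, 80)
        label variable `v' `"`fix'"'
    }
}</pre>
</div>Note this cannot recover the 81-byte case described above, since no characteristic is stored there.<br />
<br />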
If any of this behavior is fixed or otherwise changed in a future update, I would really appreciate it if StataCorp could reply letting me know which version has changed it. I have written a command for internal use at my company that in one step has to match up variables in Stata with variables in a different file format, and it explicitly takes all of this odd behavior into account.<br />
<br />
(I don't believe this behavior is version dependent, but just in case, I am running the latest version of Stata 19.5, born date 18 Feb 2026, compile number 195038, on macOS 14.7.4)]]></content:encoded>
			<category domain="https://www.statalist.org/forums/forum/general-stata-discussion/general">General</category>
			<dc:creator>Jackie Zellerite</dc:creator>
			<guid isPermaLink="true">https://www.statalist.org/forums/forum/general-stata-discussion/general/1785538-a-bug-with-spss-imports-and-81-byte-variable-labels</guid>
		</item>
	</channel>
</rss>
