  • Balancing matrices using entropy minimization function - mstdize

    mstdize from SSC is an iterative method of biproportional adjustment of rows and columns (developed by Nicholas Cox). It takes a table of var-name, with rows indexed by row-var and columns indexed by column-var, and produces a new table containing var-name scaled so that the row totals are given by row-total-var and the column totals by column-total-var (see the help for mstdize for more details). Among users of input-output and social accounting matrices this procedure is known as the RAS technique.
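
    For readers unfamiliar with the mechanics, here is a minimal Mata sketch of the biproportional (RAS) iteration just described. It is only an illustration of the idea, not mstdize's actual code: A is the starting table, u and v are the target row and column totals (which must share the same grand total), and the function name ras() is made up for this example.

    mata:
    // Rescale rows, then columns, repeatedly until the row totals match u and
    // the column totals match v (u and v must share the same grand total).
    real matrix ras(real matrix A, real colvector u, real rowvector v, real scalar tol, real scalar maxit)
    {
        real matrix X
        real scalar it

        X = A
        for (it = 1; it <= maxit; it++) {
            X = diag(u :/ rowsum(X)) * X        // scale each row to its target total
            X = X * diag(v :/ colsum(X))        // scale each column to its target total
            // the column step disturbs the rows slightly; stop once that drift is small
            if (max(abs(rowsum(X) - u)) < tol) break
        }
        return(X)
    }
    end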

    An alternative way to achieve the same balancing goal is to minimize an objective function (an entropy function or the sum of squared deviations, say) subject to the target new row totals and the target new column totals. Users of social accounting matrices typically use GAMS to balance their input-output tables given the new row and column totals.

    mstdize does not offer the option of minimizing an explicit objective function while producing the target table. I am wondering whether an alternative Stata command can minimize such an objective function (an entropy function or the sum of squared deviations, say) subject to the target new row totals and the target new column totals in order to produce the new table.
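
    For concreteness, one common cross-entropy formulation of this balancing problem is sketched below (the notation is mine, not mstdize's: a_ij is the original table, x_ij the balanced table, u_i and v_j the target totals):

    \min_{x_{ij}>0} \sum_{i,j} x_{ij} \ln\frac{x_{ij}}{a_{ij}}
    \quad \text{subject to} \quad
    \sum_j x_{ij} = u_i \ \ \forall i, \qquad \sum_i x_{ij} = v_j \ \ \forall j.

    The first-order conditions give x_{ij} = a_{ij} e^{-1-\lambda_i-\mu_j} = a_{ij}\, r_i s_j, i.e. exactly the biproportional form that the RAS iteration converges to, which is why the two approaches coincide when only the margin constraints are imposed.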

    Many thanks in advance for your help!

    [Attached image: Entropy.GIF]
    Last edited by Sami Bibi; 13 Nov 2018, 09:18.

  • #2
    mstdize is from SSC. Thanks for noticing it.

    I've not read about this in a long while, but my dim recollection is of various papers showing that several reasonable methods in this territory all generally produce equivalent results, including entropy maximizing (not minimizing, I think) and maximum likelihood. One pitfall I recall is never to start with any guessed row or column entry of zero, because whatever the multiplying factor on the next iteration, such a zero is never going to change.

    The main idea goes back much further than RAS. Yule used it over a century ago without fuss.

    I think you can get such estimates out of a generalized linear model.
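
    A hedged sketch of that GLM route, assuming a long-format layout with variables whose names are invented for this example: cell holds the original entries, rowid and colid are the row and column identifiers, and rtot, ctot, grandtot hold the target row, column, and grand totals. A Poisson GLM with row and column factors and ln(cell) as an offset, fitted to any working table that already has the target margins, should return fitted values of the biproportional form with those margins.

    generate double ln_old  = ln(cell)             // offset: log of the original cells
    generate double ytarget = rtot*ctot/grandtot   // any working table with the target margins will do
    glm ytarget i.rowid i.colid, family(poisson) link(log) offset(ln_old)
    predict double balanced, mu                    // fitted cells: original cells times row and column factors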



    • #3
      Thank you so much Mr. Cox for your prompt reply. In many cases, the RAS and entropy procedures indeed yield very similar results. However, in many instances we may have lower bounds on some components (b_ij > c_ij). Unfortunately, the RAS method cannot take those constraints into account in any straightforward way.
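
      To make that concrete, the bounded variant would be the cross-entropy program sketched above with extra inequality constraints (again only an illustration, with x_ij the balanced cells and c_ij the lower bounds):

      \min_{x_{ij}} \sum_{i,j} x_{ij} \ln\frac{x_{ij}}{a_{ij}}
      \quad \text{subject to} \quad
      \sum_j x_{ij} = u_i, \qquad \sum_i x_{ij} = v_j, \qquad x_{ij} \ge c_{ij}.

      Once a bound binds, the solution is no longer of the pure biproportional form a_ij r_i s_j, which is why plain row and column scaling (RAS) cannot enforce such constraints, whereas a general-purpose constrained optimizer (as in GAMS) can.
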
      Thank you again and warm regards!
