  • Error with multiple imputation - mi estimate does not support mixed, dfmethod() r(198);

    Hello,

    I was successfully able to impute my data, but I get the following error when I reach the mi estimate step:

    Code:
    mi set wide
    mi register imputed testscore
    mi impute chained (regress) testscore = x1 x2 x3 i.school i.id, add(20) force noisily
    mi estimate: mixed testscore x y Month ||school: ||id:, covariance(unstructured) reml dfmethod(kroger)
    mi estimate does not support mixed, dfmethod() r(198);

    mi estimate works fine when I remove dfmethod(), but this is essential for correcting the degrees of freedom. Is there a way around this?

    Version 15.1

  • #2
    I do not know much about df correction in mixed. I do know that mi has (and has to have) its own degrees of freedom for inference, which take the imputed nature of the data into account. My guess is that the df correction in mixed is invalid and/or irrelevant for multiply imputed data; but I could be wrong.

    Edit:

    My initial answer included the following statements:

    I know that the degrees of freedom do not affect the estimated parameters, i.e., coefficients and standard errors. I also know that mi only uses these estimated parameters.

    These statements remain, I believe, true in general. However, the Kenward-Roger method for correcting the degrees of freedom appears to use an adjusted covariance matrix. Therefore, this method changes the estimated parameters that mi uses. I would, therefore, like to slightly revise my initial guess: the Kenward-Roger method might be relevant even in the mi context, but there might just not be any studies showing that the adjusted covariance matrix is compatible/valid with the theory underlying mi.
    Last edited by daniel klein; 05 Jun 2020, 00:16. Reason: read up on dfmethod(kroger)
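
    To see which matrices are involved (a sketch added for illustration, not from the original post; y, x, and id are placeholder names): after fitting with dfmethod(kroger), mixed leaves both the conventional and the adjusted matrix behind, and one can compare them.

    Code:
    mixed y x || id: , reml dfmethod(kroger)
    matrix list e(V)      // conventional REML covariance matrix of the estimators
    matrix list e(V_df)   // Kenward-Roger adjusted covariance matrix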

    • #3
      Thanks Daniel!

      There is the nosmall option in the manual:

      nosmall specifies that no small-sample correction be made to the degrees of freedom. The small-sample correction is made by default to estimation commands that account for small samples. If the command stores residual degrees of freedom in e(df_r), individual tests of coefficients (and transformed coefficients) use the small-sample correction of Barnard and Rubin (1999) and the overall model test uses the small-sample correction of Reiter (2007). If the command does not store residual degrees of freedom, the large-sample test is used and the nosmall option has no effect.

      Unfortunately, predictive mean matching and logit do not store df in e(df_r), so it is assumed that I have a large sample. I think this would inflate Type I errors, so I don't know how well that would go over with journal reviewers.
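
      One way to check directly whether a given estimation command leaves residual degrees of freedom behind (a sketch added for illustration, using the model and variable names from #1):

      Code:
      quietly mixed testscore x y Month || school: || id: , reml
      display e(df_r)    // "." (missing) indicates mi estimate will fall back to the large-sample test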

      • #4
        I do not know where predictive mean matching comes into play here. Anyway, logit is based on ML, which, in turn, derives all of its great properties from large-sample/asymptotic theory (correct me if I am wrong). From that perspective, correcting ML-based estimates for small samples seems a bit strange.

        Edit:

        On second thought, I do not even know what logit has to do with the initial request.

        Anyway, if you really wanted this, you could set up a wrapper for mixed that replaces e(V) with e(V_df).

        I do not know which field you are in, but I would not worry about a paper being rejected because it does not correct the df.
        Last edited by daniel klein; 05 Jun 2020, 06:50.

        • #5
          Regarding the pmm, I am using it in place of regress for the imputation.

          As I understand it, are you referring to the following for the wrapper?

          Matrices
              e(df)      parameter-specific DFs for the method specified in post()
              e(V_df)    variance–covariance matrix of the estimators when kroger method is posted

          I am not quite sure what is meant here by a wrapper.

          • #6
            I still do not see how the imputation model has anything to do with the topic.

            A wrapper is a very simple command that does very little on its own. Here is a sketch (not tested at all):

            Code:
            program mixed_kroger , eclass properties(mi)
                // pass everything through to mixed, forcing the Kenward-Roger df method;
                // assumes the caller's command line already contains a comma for options
                mixed `0' dfmethod(kroger)
                // swap the Kenward-Roger adjusted covariance matrix in for e(V)
                tempname V
                matrix `V' = e(V_df)
                ereturn repost V = `V'
            end
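
            With the wrapper defined, the call with the model from #1 might then look like this (a sketch under the same untested assumptions; since the wrapper declares properties(mi), mi estimate should accept it directly):

            Code:
            mi estimate: mixed_kroger testscore x y Month || school: || id: , covariance(unstructured) reml

            Whether the resulting inference is valid for multiply imputed data is, as discussed above, a separate question.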
            Last edited by daniel klein; 07 Jun 2020, 12:01.
