
  • Mixed effects failing to converge

    Hello,

I am trying to fit a mixed-effects model to these data. However, I am unable to reach convergence when using the mle option of the -mixed- command in Stata. This is how I've structured the data:

Level 1 (minutes): actigraph measurements.
Level 2 (day): no day-level covariates.
Level 3 (individual): gender, sex, education, marriage, work, depression (madrs1, madrs2), average depression score, difference in depression scores (madrs1 - madrs2). Depression scores, marriage, work, and education are missing for the control group. I shall retain this missingness in the merged dataset as well, since mixed-effects models are robust to covariates that are missing at random (MAR).
Level 4 (groups): control vs condition. No group-level covariates.
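
For reference, the full nesting described above would translate into -mixed- syntax along these lines (a sketch only; the variable names are taken from the example data further down, and whether a two-category group level warrants its own random intercept is a separate question):

Code:
* hypothetical four-level specification:
* minutes within days within individuals within groups
mixed ln_act || group: || group_name1: || day:, mle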

I have around 1,200,000 rows.

    This was the code:

    mixed ln_act || group_name:, mle

    here's my output:


    Performing EM optimization ...

    Performing gradient-based optimization:
    Iteration 0: Log likelihood = -2905027.6
    Iteration 1: Log likelihood = -2905027.6
    Iteration 2: Log likelihood = -2905027.6 (backed up)
    Iteration 3: Log likelihood = -2905027.6 (backed up)
    Iteration 4: Log likelihood = -2905027.6 (backed up)
    Iteration 5: Log likelihood = -2905027.6 (backed up)
    Iteration 6: Log likelihood = -2905027.6 (backed up)
    Iteration 7: Log likelihood = -2905027.6 (backed up)
    Iteration 8: Log likelihood = -2905027.6 (backed up)
    --Break--
    r(1);

    With xtreg, I find reasonable values:

    . xtreg ln_act, re

Random-effects GLS regression                   Number of obs     = 1,215,378
Group variable: group_name1                     Number of groups  =        55

R-squared:                                      Obs per group:
     Within  = 0.0000                                         min =    16,680
     Between = 0.0000                                         avg =  22,097.8
     Overall = 0.0000                                         max =    31,473

                                                Wald chi2(0)      =         .
corr(u_i, X) = 0 (assumed)                      Prob > chi2       =         .

------------------------------------------------------------------------------
      ln_act | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
-------------+----------------------------------------------------------------
       _cons |   3.142652   .0769708    40.83   0.000     2.991792    3.293512
-------------+----------------------------------------------------------------
     sigma_u |  .57055188
     sigma_e |  2.6409285
         rho |  .04459288   (fraction of variance due to u_i)
------------------------------------------------------------------------------



However, I wish to implement random slopes as well. I would be very grateful if anyone could help me out!


    My data looks like this:

    Code:
    * Example generated by -dataex-. For more info, type help dataex
    clear
    input str19 timestamp str10 date float activity int minute byte day str12 group_name byte(gender inpatient marriage work madrs1 madrs2 avg_age avg_edu) float avg_madrs byte(delta_madrs group) int sequence float(ln_act group_name1)
    "2003-05-07 12:00:00" "2003-05-07"  1.1   1 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2   1  .0953102 1
    "2003-05-07 12:01:00" "2003-05-07"  143   2 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2   2  4.962845 1
    "2003-05-07 12:02:00" "2003-05-07"  1.1   3 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2   3  .0953102 1
    "2003-05-07 12:03:00" "2003-05-07"   20   4 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2   4  2.995732 1
    "2003-05-07 12:04:00" "2003-05-07"  166   5 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2   5  5.111988 1
    "2003-05-07 12:05:00" "2003-05-07"  160   6 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2   6  5.075174 1
    "2003-05-07 12:06:00" "2003-05-07"   17   7 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2   7  2.833213 1
    "2003-05-07 12:07:00" "2003-05-07"  646   8 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2   8  6.470799 1
    "2003-05-07 12:08:00" "2003-05-07"  978   9 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2   9  6.885509 1
    "2003-05-07 12:09:00" "2003-05-07"  306  10 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  10  5.723585 1
    "2003-05-07 12:10:00" "2003-05-07"  277  11 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  11  5.624018 1
    "2003-05-07 12:11:00" "2003-05-07"  439  12 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  12  6.084499 1
    "2003-05-07 12:12:00" "2003-05-07"  130  13 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  13  4.867535 1
    "2003-05-07 12:13:00" "2003-05-07"   32  14 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  14  3.465736 1
    "2003-05-07 12:14:00" "2003-05-07"  184  15 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  15  5.214936 1
    "2003-05-07 12:15:00" "2003-05-07"  454  16 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  16  6.118097 1
    "2003-05-07 12:16:00" "2003-05-07"  783  17 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  17  6.663133 1
    "2003-05-07 12:17:00" "2003-05-07"  386  18 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  18  5.955837 1
    "2003-05-07 12:18:00" "2003-05-07"  306  19 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  19  5.723585 1
    "2003-05-07 12:19:00" "2003-05-07"  120  20 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  20  4.787492 1
    "2003-05-07 12:20:00" "2003-05-07"  268  21 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  21  5.590987 1
    "2003-05-07 12:21:00" "2003-05-07"  268  22 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  22  5.590987 1
    "2003-05-07 12:22:00" "2003-05-07"  204  23 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  23   5.31812 1
    "2003-05-07 12:23:00" "2003-05-07"  485  24 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  24  6.184149 1
    "2003-05-07 12:24:00" "2003-05-07"  485  25 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  25  6.184149 1
    "2003-05-07 12:25:00" "2003-05-07"  328  26 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  26  5.793014 1
    "2003-05-07 12:26:00" "2003-05-07"   61  27 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  27 4.1108737 1
    "2003-05-07 12:27:00" "2003-05-07"  172  28 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  28  5.147494 1
    "2003-05-07 12:28:00" "2003-05-07" 1221  29 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  29  7.107426 1
    "2003-05-07 12:29:00" "2003-05-07"  783  30 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  30  6.663133 1
    "2003-05-07 12:30:00" "2003-05-07"  398  31 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  31  5.986452 1
    "2003-05-07 12:31:00" "2003-05-07"  469  32 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  32  6.150603 1
    "2003-05-07 12:32:00" "2003-05-07"  190  33 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  33  5.247024 1
    "2003-05-07 12:33:00" "2003-05-07"  242  34 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  34  5.488938 1
    "2003-05-07 12:34:00" "2003-05-07"  242  35 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  35  5.488938 1
    "2003-05-07 12:35:00" "2003-05-07"  212  36 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  36  5.356586 1
    "2003-05-07 12:36:00" "2003-05-07"   91  37 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  37 4.5108595 1
    "2003-05-07 12:37:00" "2003-05-07"  116  38 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  38   4.75359 1
    "2003-05-07 12:38:00" "2003-05-07"  259  39 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  39  5.556828 1
    "2003-05-07 12:39:00" "2003-05-07"  667  40 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  40   6.50279 1
    "2003-05-07 12:40:00" "2003-05-07"  783  41 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  41  6.663133 1
    "2003-05-07 12:41:00" "2003-05-07"  469  42 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  42  6.150603 1
    "2003-05-07 12:42:00" "2003-05-07"  485  43 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  43  6.184149 1
    "2003-05-07 12:43:00" "2003-05-07"  587  44 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  44  6.375025 1
    "2003-05-07 12:44:00" "2003-05-07"  568  45 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  45  6.342122 1
    "2003-05-07 12:45:00" "2003-05-07"  306  46 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  46  5.723585 1
    "2003-05-07 12:46:00" "2003-05-07"  134  47 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  47   4.89784 1
    "2003-05-07 12:47:00" "2003-05-07"  242  48 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  48  5.488938 1
    "2003-05-07 12:48:00" "2003-05-07"  139  49 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  49  4.934474 1
    "2003-05-07 12:49:00" "2003-05-07"  235  50 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  50  5.459586 1
    "2003-05-07 12:50:00" "2003-05-07"  197  51 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  51  5.283204 1
    "2003-05-07 12:51:00" "2003-05-07"  667  52 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  52   6.50279 1
    "2003-05-07 12:52:00" "2003-05-07"  517  53 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  53  6.248043 1
    "2003-05-07 12:53:00" "2003-05-07"  328  54 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  54  5.793014 1
    "2003-05-07 12:54:00" "2003-05-07"  759  55 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  55  6.632002 1
    "2003-05-07 12:55:00" "2003-05-07"    8  56 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  56 2.0794415 1
    "2003-05-07 12:56:00" "2003-05-07"  306  57 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  57  5.723585 1
    "2003-05-07 12:57:00" "2003-05-07"  689  58 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  58  6.535241 1
    "2003-05-07 12:58:00" "2003-05-07"  469  59 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  59  6.150603 1
    "2003-05-07 12:59:00" "2003-05-07"  197  60 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  60  5.283204 1
    "2003-05-07 13:00:00" "2003-05-07"  306  61 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  61  5.723585 1
    "2003-05-07 13:01:00" "2003-05-07"  286  62 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  62  5.655992 1
    "2003-05-07 13:02:00" "2003-05-07"   12  63 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  63  2.484907 1
    "2003-05-07 13:03:00" "2003-05-07"  1.1  64 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  64  .0953102 1
    "2003-05-07 13:04:00" "2003-05-07"  130  65 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  65  4.867535 1
    "2003-05-07 13:05:00" "2003-05-07"  160  66 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  66  5.075174 1
    "2003-05-07 13:06:00" "2003-05-07"  296  67 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  67   5.69036 1
    "2003-05-07 13:07:00" "2003-05-07"  317  68 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  68  5.758902 1
    "2003-05-07 13:08:00" "2003-05-07"  338  69 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  69  5.823046 1
    "2003-05-07 13:09:00" "2003-05-07"  277  70 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  70  5.624018 1
    "2003-05-07 13:10:00" "2003-05-07"   88  71 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  71  4.477337 1
    "2003-05-07 13:11:00" "2003-05-07"   79  72 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  72 4.3694477 1
    "2003-05-07 13:12:00" "2003-05-07"  197  73 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  73  5.283204 1
    "2003-05-07 13:13:00" "2003-05-07"  154  74 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  74  5.036952 1
    "2003-05-07 13:14:00" "2003-05-07"  172  75 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  75  5.147494 1
    "2003-05-07 13:15:00" "2003-05-07"  250  76 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  76  5.521461 1
    "2003-05-07 13:16:00" "2003-05-07"  398  77 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  77  5.986452 1
    "2003-05-07 13:17:00" "2003-05-07"   52  78 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  78 3.9512436 1
    "2003-05-07 13:18:00" "2003-05-07"   38  79 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  79  3.637586 1
    "2003-05-07 13:19:00" "2003-05-07"   70  80 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  80  4.248495 1
    "2003-05-07 13:20:00" "2003-05-07"    8  81 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  81 2.0794415 1
    "2003-05-07 13:21:00" "2003-05-07"  517  82 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  82  6.248043 1
    "2003-05-07 13:22:00" "2003-05-07"   14  83 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  83 2.6390574 1
    "2003-05-07 13:23:00" "2003-05-07"  178  84 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  84  5.181784 1
    "2003-05-07 13:24:00" "2003-05-07"  250  85 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  85  5.521461 1
    "2003-05-07 13:25:00" "2003-05-07"   67  86 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  86  4.204693 1
    "2003-05-07 13:26:00" "2003-05-07"  190  87 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  87  5.247024 1
    "2003-05-07 13:27:00" "2003-05-07"  296  88 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  88   5.69036 1
    "2003-05-07 13:28:00" "2003-05-07"  190  89 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  89  5.247024 1
    "2003-05-07 13:29:00" "2003-05-07"  184  90 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  90  5.214936 1
    "2003-05-07 13:30:00" "2003-05-07"  154  91 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  91  5.036952 1
    "2003-05-07 13:31:00" "2003-05-07"  349  92 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  92  5.855072 1
    "2003-05-07 13:32:00" "2003-05-07"  197  93 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  93  5.283204 1
    "2003-05-07 13:33:00" "2003-05-07"  286  94 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  94  5.655992 1
    "2003-05-07 13:34:00" "2003-05-07"  197  95 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  95  5.283204 1
    "2003-05-07 13:35:00" "2003-05-07"  242  96 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  96  5.488938 1
    "2003-05-07 13:36:00" "2003-05-07"  689  97 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  97  6.535241 1
    "2003-05-07 13:37:00" "2003-05-07"  166  98 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  98  5.111988 1
    "2003-05-07 13:38:00" "2003-05-07"   32  99 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2  99  3.465736 1
    "2003-05-07 13:39:00" "2003-05-07"  398 100 1 "condition_1" 2 2 1 2 19 19 37 8 19 0 2 100  5.986452 1
    end

  • #2
    Anshika:
    With xtreg, I find reasonable values:

    . xtreg ln_act, re
    Your code should have been:
    Code:
    . xtreg ln_act, mle
    Kind regards,
    Carlo
    (Stata 19.0)



    • #3
      Originally posted by Carlo Lazzaro View Post
      Anshika:


      Your code should have been:
      Code:
      . xtreg ln_act, mle
this is the output:


      . xtreg ln_act, mle
      Iteration 0: Log likelihood = -2905027.6
      Iteration 1: Log likelihood = -2905027.6
      Iteration 2: Log likelihood = -2905027.6

Random-effects ML regression                    Number of obs     = 1,215,378
Group variable: group_name1                     Number of groups  =        55

Random effects u_i ~ Gaussian                   Obs per group:
                                                              min =    16,680
                                                              avg =  22,097.8
                                                              max =    31,473

                                                Wald chi2(0)      =      0.00
Log likelihood = -2905027.6                     Prob > chi2       =         .

------------------------------------------------------------------------------
      ln_act | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
-------------+----------------------------------------------------------------
       _cons |   3.142652   .0762667    41.21   0.000     2.993172    3.292132
-------------+----------------------------------------------------------------
    /sigma_u |   .5653279   .0539563                       .4688777    .6816182
    /sigma_e |   2.640928   .0016939                       2.637611    2.644251
         rho |   .0438156   .0079975                       .0302598    .0618941
------------------------------------------------------------------------------
LR test of sigma_u=0: chibar2(01) = 5.3e+04    Prob >= chibar2 = 0.000


I am still unable to wrap my head around the non-convergence with -mixed-, though. I need to integrate random slopes and higher-level random intercepts.

      thanks!



      • #4
        Anshika:
the usual advice in cases like yours is to start with a more parsimonious model, add one more predictor at a time, and see when convergence issues creep up.
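A sketch of that build-up (the fixed effects and their ordering here are illustrative, not a recommendation; variable names are taken from the posted data):

Code:
* start from the random-intercept-only model
mixed ln_act || group_name1:, mle
* add one fixed effect at a time
mixed ln_act i.gender || group_name1:, mle
mixed ln_act i.gender c.avg_madrs || group_name1:, mle
* only once those converge, try a random slope, e.g. on a time trend
mixed ln_act i.gender c.minute || group_name1: minute, mle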
        Kind regards,
        Carlo
        (Stata 19.0)



        • #5
          Originally posted by Carlo Lazzaro View Post
          Anshika:
the usual advice in cases like yours is to start with a more parsimonious model, add one more predictor at a time, and see when convergence issues creep up.
That makes sense. However, is this not the simplest model possible:

mixed ln_act || group_name:, mle ?

Here I have not added any predictors, yet I still encounter convergence issues.



          • #6
            It is very hard to know what is going on. The data you provided has only one group/group_name/group_name1. So we cannot use the example data to test any of these models. The ICC (rho in the xtreg output you show) is quite low, .044. That doesn't mean that you are absolved of dealing with dependency in the data, but it may mean that you do not need to use mixed for these purposes. Instead you may choose to run the models using OLS with cluster robust standard errors (vce(cluster group_name1)). Note that the ICC can increase when you include predictors (see this post). So you might have more between cluster variability than you show here.
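
A minimal version of that alternative (a sketch, assuming the group indicator is the only predictor of interest):

Code:
* pooled OLS with standard errors clustered on individual
regress ln_act i.group, vce(cluster group_name1)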



            • #7
              Originally posted by Erik Ruzek View Post
              It is very hard to know what is going on. The data you provided has only one group/group_name/group_name1. So we cannot use the example data to test any of these models. The ICC (rho in the xtreg output you show) is quite low, .044. That doesn't mean that you are absolved of dealing with dependency in the data, but it may mean that you do not need to use mixed for these purposes. Instead you may choose to run the models using OLS with cluster robust standard errors (vce(cluster group_name1)). Note that the ICC can increase when you include predictors (see this post). So you might have more between cluster variability than you show here.
Hello, thank you for your response! So there are 55 distinct values under group_name; I've only displayed the first 100 rows.

              Here is my dataset: https://www.kaggle.com/datasets/jhamb21/depression
              Last edited by anshika jhamb; 25 Apr 2024, 09:37.



              • #8
                I don't see anything about your dataset in #7. It is blank. Were you trying to link to your dataset?



                • #9
                  Originally posted by Erik Ruzek View Post
                  I don't see anything about your dataset in #7. It is blank. Were you trying to link to your dataset?
Apologies! Here it is: https://www.kaggle.com/datasets/jhamb21/depression



                  • #10
                    Originally posted by Erik Ruzek View Post
                    I don't see anything about your dataset in #7. It is blank. Were you trying to link to your dataset?
                    Hi Erik,

I am unsure if this is the correct forum, but I also tried fitting a simple two-level random-intercept model on the dataset in R. I was able to run the model; however, my two levels (individuals and days nested within individuals) could hardly explain much variance:
Random effects:
 Groups     Name        Variance Std.Dev.
 day:number (Intercept)    8377   91.52
 number     (Intercept)    4502   67.10
 Residual                133733  365.70
Number of obs: 1215378, groups: day:number, 878; number, 55




However, if I include random slopes I encounter convergence issues yet again. When I look at the model summary, I notice the variance explained by the random slope is close to 0 (perhaps that is why the convergence issue arises?). Moreover, when testing assumptions, the simple random-intercept model does not meet the homoscedasticity assumption. Can I conclude that my model does not adequately explain the variance in the data? Even my residuals are not normally distributed. I would be grateful if someone could help me out!!
                    Last edited by anshika jhamb; 28 Apr 2024, 06:24.



                    • #11
                      It is very hard to say, unfortunately. It looks like almost all the variance in your model is at the residual level. So there simply is not much variance to be explained at the level of the groups. Very likely you are asking more from your data than it can provide and you are running up against computing constraints with over 1.2 million observations.



                      • #12
                        Originally posted by anshika jhamb View Post
                        . . . my two levels (individuals and days nested within individuals) could hardly explain much variance:
                        I've been under the impression that when time is included in a mixed model, it's not considered nested within individual. I recall seeing econometric panel models where days have been cross-classified with stock ticker or whatever, but nested under?

                        . . . if i include random slopes I encounter convergence issues yet again.
                        Looking at the timecourse shown in that Kaggle site for one of the patients, it strikes me as futile to try to include a random slope for days in a mixed model.

                        You mention heteroscedasticity and nonnormal distribution of residuals in this latter model. Its variance components are much larger in magnitude than those in the model that you show above in #1. I gather in this latter model you didn't first take the logarithms of the activity values.

                        You don't mention what your research objective is with these data, but with only 55 total participants and long Ts (here it's 878 days, but above in #1 it was between 16 680 and 31 473), you might want to look into what approaches economists favor for panel data that have T >> N.



                        • #13
                          Anshika:
elaborating on Joseph Coveney's assist, I would point you to -xtregar- and -xtgls-.
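A sketch of what those might look like here (assumes the panel is declared on the per-individual minute counter -sequence-):

Code:
xtset group_name1 sequence
* random-effects model with AR(1) disturbances
xtregar ln_act, re
* feasible GLS allowing panel-level heteroskedasticity
xtgls ln_act, panels(heteroskedastic)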
                          Kind regards,
                          Carlo
                          (Stata 19.0)



                          • #14
                            Originally posted by Joseph Coveney View Post
                            I've been under the impression that when time is included in a mixed model, it's not considered nested within individual. I recall seeing econometric panel models where days have been cross-classified with stock ticker or whatever, but nested under?

                            Looking at the timecourse shown in that Kaggle site for one of the patients, it strikes me as futile to try to include a random slope for days in a mixed model.

                            You mention heteroscedasticity and nonnormal distribution of residuals in this latter model. Its variance components are much larger in magnitude than those in the model that you show above in #1. I gather in this latter model you didn't first take the logarithms of the activity values.

                            You don't mention what your research objective is with these data, but with only 55 total participants and long Ts (here it's 878 days, but above in #1 it was between 16 680 and 31 473), you might want to look into what approaches economists favor for panel data that have T >> N.

Thank you for your responses, Joseph, Erik and Carlo! I'm new to mixed effects, so forgive me should I misinterpret concepts! Here's why I selected nested over crossed:

Every individual starts on a different date and ends on a different one. Therefore, they do not have (m)any days/dates in common. Wouldn't a crossed effect of day entail that these subjects share specific days?

Secondly, thank you! Yes, random slopes for days don't make sense. Is that why the convergence issue was occurring?

Thirdly, I wished to ask: can a negative binomial distribution be applied to continuous outcome variables? Actigraph measurements can be considered count data, can they not? Moreover, the activity distribution (outcome) is severely right-skewed with lots of 0 values.
                            Last edited by anshika jhamb; 29 Apr 2024, 00:47.
