
  • Bounded 'life satisfaction' dependent variable

    Hi there, I have a question that I hope will be simple for someone to answer.

    I am working with panel data (2011-2016) at the local authority level (think of each local authority as a different city) within the UK. My dependent variable is life satisfaction, which is measured by asking the following question: "Overall, how satisfied are you with your life?", where 0 is 'not at all satisfied' and 10 is 'completely satisfied'. The life satisfaction score in my data set is an aggregated mean over the households in each local authority, and most of my data falls within the 7-8 range. The dependent variable is therefore measured on a bounded scale: the maximum life satisfaction can be is 10, and over time it might not show a visible trend. I will attach a screenshot of what my dataset looks like.

    My main independent variable is gross value added (GVA) per head, with unemployment, life expectancy, obesity prevalence, utilisation of outdoor space for exercise, violent crime, and social isolation as the other independent variables. I ran an -xtreg- command, for which the results were insignificant between my dependent and main independent variables. I think I should address the problem that my dependent variable is bounded.
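    (For reference, a minimal sketch of the setup just described, using the variable names that appear later in the thread in posts #10-#11; the covariate list is abbreviated, and -xtreg- fits the random-effects model by default.)

    Code:
    * declare the local-authority panel (nLAname encodes the authority name)
    encode laname, gen(nLAname)
    xtset nLAname year

    * random-effects GLS, as described above (covariate list abbreviated)
    xtreg happinessindex lgva unemployementrateaged16 lifeexpectancypca, re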

    What statistical method for regression would be best to employ in such a case?
    Attached Files

  • #2
    Aishwarya, it's best not to attach screenshots, as specified in the FAQ. They're not always readable. It's better to attach an extract using -dataex- (which is part of Stata 14.2 or 15.1, or which you can download from SSC).
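    For example (a sketch, using the variable names from post #11; -dataex- lists the first 100 observations unless you restrict it):

    Code:
    ssc install dataex                           // only if not already installed
    dataex laname year happinessindex lgva in 1/20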

    That said, I can see your screenshot. It looks like you have region-year data, with the average life satisfaction score in each region.

    Personally, I have had data like this before for a single year, and we used OLS. You should check the distribution of scores, but all the scores in the sample you show are quite far away from either of the bounds. Also, when you take the region-level mean, most of the variation is between individuals, and you've likely taken a lot of that variation out. I'm not that surprised you have few significant associations; we had few significant coefficients as well when we did this.

    If you think you can meet the assumptions of Tobit regression, or censored normal regression, then you could use the -xttobit- command, specifying the lower and upper limits as 0 and 10. However, it's not really true that your dependent variable is censored at 10 in the sense that the underlying quantity could continue upwards. It's more that you had ordinal data on individuals, but someone took a group mean. Also, if the distribution of regional mean life satisfaction scores is tightly clustered and there are few observations near the upper or lower limits, you shouldn't see many differences between the Tobit and -xtreg- specifications.
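    A minimal sketch of that specification, with the scale bounds as the limits (variable names as in post #10, covariate list abbreviated; illustrative only, for the reasons just given):

    Code:
    xttobit happinessindex lgva unemployementrateaged16, ll(0) ul(10)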
    Be aware that it can be very hard to answer a question without sample data. You can use the dataex command for this. Type help dataex at the command line.

    When presenting code or results, please use the code delimiters to format them. Use the # button on the formatting toolbar, between the " (double quote) and <> buttons.



    • #3
      Originally posted by Weiwen Ng View Post
      I would like to mention that it cannot go above 10, because the maximum satisfaction someone can report is 10, given the way it is measured. So suppose someone reports a satisfaction of 10 in 2011; the next year that person cannot write 11, because there is no 11. That is the reason I mentioned that there will be no trend, even though my independent variable (GVA per head) does have a trend over time.



      • #4
        Originally posted by Aishwarya Gupta View Post

        That's what I was thinking as well.

        It's also worth noting that a lot of individual respondents would have to give answers of 10 in order for this ceiling effect (i.e. many people answering at the maximum value of a scale) to affect your regional mean scores.

        Furthermore, you don't actually have individual data. You have regional data, and as I mentioned, the sample you gave us has most regions scoring around 7.5. If a lot of regions are scoring around 9, then we should talk, but I don't think a Tobit regression will rescue you either, because you don't have access to the individual scores. I suspect you have what you have, and there is no statistical fix.
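        A quick way to check how close the regional means sit to either bound (a sketch, using the variable names from post #10):

        Code:
        summarize happinessindex, detail
        count if happinessindex <= 0 | happinessindex >= 10   // observations at either bound
        histogram happinessindex                              // eyeball the distribution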



        • #5
          Pragmatically, if you don't have a meaningful number of observations at the boundary (0 or 10), there won't be much difference between OLS and Tobit. If you have no 0s or 10s, then OLS and Tobit should be identical or almost identical.
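          A sketch of that comparison with the panel versions of both estimators (variable names as in post #10, covariates abbreviated; with no observations at the bounds, the two coefficient columns should essentially coincide):

          Code:
          xtreg happinessindex lgva, re
          estimates store re_gls

          xttobit happinessindex lgva, ll(0) ul(10)
          estimates store re_tobit

          estimates table re_gls re_tobit, b se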

          While the underlying data are ordinal, to aggregate them the way you do, you must assume they are more than ordinal (interval or ratio; I'm not sure whether you can get away with interval or must assume ratio). Strictly speaking, you can't do arithmetic meaningfully with ordinal data. However, this is done with great regularity whenever someone aggregates such scales, so I would be surprised if anyone objects.



          • #6
            Originally posted by Weiwen Ng View Post
            Hello, these are the results I got using Stata. I used the -xttobit- model. What's your take on it?
            Attached Files



            • #7
              Aishwarya,

              Please present the full command and results using the code delimiters - as I mentioned earlier regarding data, we prefer that users do not post screenshots.

              I mentioned the Tobit model for explanatory purposes, but sometimes I get carried away explaining things. I did state, and Phil did concur, that if most of your region-year observations aren't at 10, then there won't be a meaningful difference between Tobit and OLS, or the -xt- versions of each command. I reiterate that statement. Most papers in my collection that analyze changes in scale scores that are bounded like yours use a linear model (e.g. -xtreg- or -mixed-, or the equivalent in whatever programming language was used).
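              For instance, a hedged sketch of the -mixed- counterpart of the random-effects model in this thread (random intercept for local authority; covariates abbreviated):

              Code:
              mixed happinessindex lgva unemployementrateaged16 || nLAname:, mle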

              If I didn't convey things clearly enough: I think that -xtreg- is sufficient for this purpose. Some collaborators and I are publishing a paper in which we applied OLS (cross-sectional data) to average satisfaction ratings for nursing facilities, and one of my collaborators is an economist. To be clear, the model isn't perfect, but it suffices.

              You earlier said that -xtreg- gave you a non-significant coefficient on your main explanatory variable. If you got major differences in your -xttobit- regression, and you did specify the correct upper and lower limits, and now the main explanatory variable has a significant coefficient, then if I were a reviewer, I would be surprised, and I would want to see the -xtreg- equivalent output, and the number of censored observations. To be clear, I don't think that Tobit is the right model for this problem, for reasons you stated in post #3.
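              A sketch of the checks such a reviewer might ask for (variable names as in post #10):

              Code:
              count if happinessindex <= 0                               // would be left-censored at ll(0)
              count if happinessindex >= 10 & !missing(happinessindex)   // would be right-censored at ul(10)
              * after -xttobit- with ll() and ul(), the stored results include
              * the censoring counts; inspect them with:
              ereturn list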



              • #8
                Originally posted by Weiwen Ng View Post
                Hello,

                Thank you for your reply. I would like to share my OLS and GLS results, along with my data, but first I have a query.

                Sorry, I am fairly new to Stata, so excuse me if I have made the same mistake again, but I used #delimit and also tried #delimiter, and Stata says 'unknown command'.
                Please explain to me again how I can get the results out of Stata without taking screenshots; I tried to find this on Google, but with no success.



                • #9
                  I think you misunderstand. CODE delimiters are not a feature of Stata, they are a feature of the forum software that runs Statalist.

                  To assure maximum readability of results that you post, please copy them from the Results window or your log file into a code block in the Forum editor, as explained in section 12 of the Statalist FAQ linked to at the top of the page. For example, the following:

                  [CODE]
                  . sysuse auto, clear
                  (1978 Automobile Data)

                  . describe make price

                                storage   display    value
                  variable name   type    format     label      variable label
                  -----------------------------------------------------------------
                  make            str18   %-18s                 Make and Model
                  price           int     %8.0gc                Price
                  [/CODE]

                  will be presented in the post as the following:
                  Code:
                  . sysuse auto, clear
                  (1978 Automobile Data)
                  
                  . describe make price
                  
                                storage   display    value
                  variable name   type    format     label      variable label
                  -----------------------------------------------------------------
                  make            str18   %-18s                 Make and Model
                  price           int     %8.0gc                Price
                  You can type the [CODE] and [/CODE] yourself, or click the "#" icon on the forum's Advanced Editor toolbar to insert them for you. Then select your results in your Stata Results window, copy them to the clipboard, and paste the contents of the clipboard between the CODE delimiters.



                  • #10
                    My OLS Results:
                    Code:
                    . regress happinessindex lgva unemployementrateaged16 mentalhealth obesityqofprevalence17 smokingattributable
                    > deathsfromhea violentcrimeincludingsexualviole socialisolation lifeexpectancypca
                    
                          Source |       SS           df       MS      Number of obs   =       500
                    -------------+----------------------------------   F(8, 491)       =     48.67
                           Model |  6.25298519         8  .781623148   Prob > F        =    0.0000
                        Residual |  7.88592196       491  .016060941   R-squared       =    0.4423
                    -------------+----------------------------------   Adj R-squared   =    0.4332
                           Total |  14.1389071       499  .028334483   Root MSE        =    .12673
                    
                    --------------------------------------------------------------------------------------------------
                                      happinessindex |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
                    ---------------------------------+----------------------------------------------------------------
                                                lgva |   .0406153   .0168767     2.41   0.016     .0074558    .0737748
                             unemployementrateaged16 |  -.0226846   .0027359    -8.29   0.000    -.0280602   -.0173091
                                        mentalhealth |  -.2176063   .0351464    -6.19   0.000    -.2866622   -.1485503
                              obesityqofprevalence17 |   .0382862   .0063163     6.06   0.000     .0258759    .0506965
                    smokingattributabledeathsfromhea |  -.0057882   .0008938    -6.48   0.000    -.0075444   -.0040321
                    violentcrimeincludingsexualviole |   .0026786   .0010592     2.53   0.012     .0005975    .0047597
                                     socialisolation |   .0036622   .0012284     2.98   0.003     .0012486    .0060758
                                   lifeexpectancypca |  -.0015486   .0022503    -0.69   0.492    -.0059699    .0028728
                                               _cons |   7.062978   .2098355    33.66   0.000     6.650692    7.475264
                    --------------------------------------------------------------------------------------------------
                    My GLS Results:
                    Code:
                     xtreg happinessindex lgva unemployementrateaged16 mentalhealth obesityqofprevalence17 smokingattributablede
                    > athsfromhea violentcrimeincludingsexualviole socialisolation lifeexpectancypca
                    
                    Random-effects GLS regression                   Number of obs     =        500
                    Group variable: nLAname                         Number of groups  =        100
                    
                    R-sq:                                           Obs per group:
                         within  = 0.3699                                         min =          5
                         between = 0.4887                                         avg =        5.0
                         overall = 0.4323                                         max =          5
                    
                                                                    Wald chi2(8)      =     322.52
                    corr(u_i, X)   = 0 (assumed)                    Prob > chi2       =     0.0000
                    
                    --------------------------------------------------------------------------------------------------
                                      happinessindex |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
                    ---------------------------------+----------------------------------------------------------------
                                                lgva |    .053726   .0254011     2.12   0.034     .0039408    .1035113
                             unemployementrateaged16 |  -.0168212   .0029602    -5.68   0.000     -.022623   -.0110193
                                        mentalhealth |  -.2248728   .0526844    -4.27   0.000    -.3281323   -.1216132
                              obesityqofprevalence17 |   .0457305   .0091431     5.00   0.000     .0278104    .0636507
                    smokingattributabledeathsfromhea |  -.0072916    .001115    -6.54   0.000    -.0094769   -.0051063
                    violentcrimeincludingsexualviole |   .0039391    .001243     3.17   0.002      .001503    .0063753
                                     socialisolation |   .0036549     .00125     2.92   0.003     .0012049    .0061048
                                   lifeexpectancypca |  -.0018097   .0035733    -0.51   0.613    -.0088132    .0051938
                                               _cons |   6.876862   .2955269    23.27   0.000      6.29764    7.456084
                    ---------------------------------+----------------------------------------------------------------
                                             sigma_u |  .07486468
                                             sigma_e |  .09925072
                                                 rho |  .36263771   (fraction of variance due to u_i)
                    --------------------------------------------------------------------------------------------------
                    My Tobit Results:

                    Code:
                    . xttobit happinessindex lgva unemployementrateaged16 mentalhealth obesityqofprevalence17 smokingattributable
                    > deathsfromhea violentcrimeincludingsexualviole socialisolation lifeexpectancypca
                    
                    Fitting comparison model:
                    
                    Fitting constant-only model:
                    
                    Iteration 0:   log likelihood =  181.94967  
                    Iteration 1:   log likelihood =  181.95017  
                    Iteration 2:   log likelihood =  181.95017  
                    
                    Fitting full model:
                    
                    Iteration 0:   log likelihood =  327.87198  
                    Iteration 1:   log likelihood =  327.91297  
                    Iteration 2:   log likelihood =  327.91297  
                    
                    Obtaining starting values for full model:
                    
                    Iteration 0:   log likelihood =  361.52137
                    Iteration 1:   log likelihood =  364.04804
                    Iteration 2:   log likelihood =  364.07196
                    Iteration 3:   log likelihood =  364.07197
                    
                    Fitting full model:
                    
                    Iteration 0:   log likelihood =  364.07197  
                    Iteration 1:   log likelihood =  364.07197  (backed up)
                    
                    Random-effects tobit regression                 Number of obs     =        500
                                                                       Uncensored     =        500
                    Limits: lower = -inf                               Left-censored  =          0
                            upper = +inf                               Right-censored =          0
                    
                    Group variable: nLAname                         Number of groups  =        100
                    Random effects u_i ~ Gaussian                   Obs per group:
                                                                                  min =          5
                                                                                  avg =        5.0
                                                                                  max =          5
                    
                    Integration method: mvaghermite                 Integration pts.  =         12
                    
                                                                    Wald chi2(8)      =     330.60
                    Log likelihood  =  364.07197                    Prob > chi2       =     0.0000
                    
                    --------------------------------------------------------------------------------------------------
                                      happinessindex |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
                    ---------------------------------+----------------------------------------------------------------
                                                lgva |   .0523589   .0246984     2.12   0.034     .0039508    .1007669
                             unemployementrateaged16 |  -.0171554   .0030147    -5.69   0.000    -.0230641   -.0112467
                                        mentalhealth |  -.2249888    .050897    -4.42   0.000    -.3247451   -.1252325
                              obesityqofprevalence17 |   .0450798   .0089681     5.03   0.000     .0275026     .062657
                    smokingattributabledeathsfromhea |   -.007221   .0011034    -6.54   0.000    -.0093836   -.0050585
                    violentcrimeincludingsexualviole |   .0038847   .0012265     3.17   0.002     .0014808    .0062886
                                     socialisolation |   .0036712   .0012383     2.96   0.003     .0012442    .0060981
                                   lifeexpectancypca |  -.0017946   .0034377    -0.52   0.602    -.0085323    .0049431
                                               _cons |   6.895143   .2890244    23.86   0.000     6.328666    7.461621
                    ---------------------------------+----------------------------------------------------------------
                                            /sigma_u |   .0737318   .0076861     9.59   0.000     .0586673    .0887963
                                            /sigma_e |   .1028686   .0036789    27.96   0.000      .095658    .1100792
                    ---------------------------------+----------------------------------------------------------------
                                                 rho |    .339385   .0520314                      .2442626    .4460577
                    --------------------------------------------------------------------------------------------------
                    LR test of sigma_u=0: chibar2(01) = 72.32              Prob >= chibar2 = 0.000
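                    Note that the header of this output reads "Limits: lower = -inf, upper = +inf" with all 500 observations uncensored, which indicates the model was run without the ll() and ul() options, so nothing was treated as censored. A sketch of the command with the bounds suggested in #2:

                    Code:
                    xttobit happinessindex lgva unemployementrateaged16 mentalhealth obesityqofprevalence17 ///
                        smokingattributabledeathsfromhea violentcrimeincludingsexualviole ///
                        socialisolation lifeexpectancypca, ll(0) ul(10)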



                    • #11
                      Some data:


                      Code:
                      * Example generated by -dataex-. To install: ssc install dataex
                      clear
                      input str28 laname int year float(happinessindex lgva lifeexpectancypca mentalhealth obesityqofprevalence17)
                      "Adur"                         2011 7.03  9.704854 -1.043456         .         .
                      "Adur"                         2012 7.27  9.699349 -.7600963         .         .
                      "Adur"                         2013 7.82  9.741909 -.7174443         .         .
                      "Adur"                         2014 7.54  9.723583 -.7547518         .         .
                      "Adur"                         2015 7.66  9.824012 -.8803704         .         .
                      "Adur"                         2016 7.32  9.820704 -.9179714         .         .
                      "Allerdale"                    2011 7.76  9.772182 -.1291303         .         .
                      "Allerdale"                    2012 7.59   9.81929  -.060408         .         .
                      "Allerdale"                    2013 7.71  9.834084  .0841847         .         .
                      "Allerdale"                    2014 8.07  9.869776  .0352304         .         .
                      "Allerdale"                    2015  7.7   9.89187  .0945577         .         .
                      "Allerdale"                    2016 7.62  9.891972  .0879996         .         .
                      "Amber Valley"                 2011 7.53  9.793506  .5499468         .         .
                      "Amber Valley"                 2012 7.67  9.811537  .6057476         .         .
                      "Amber Valley"                 2013 7.24  9.896009  .7001123         .         .
                      "Amber Valley"                 2014 7.72  9.909321  .6572871         .         .
                      "Amber Valley"                 2015 7.92  9.944198  .5528244         .         .
                      "Amber Valley"                 2016 7.45  9.961379  .5522352         .         .
                      "Arun"                         2011  7.8  9.634824  .2426087         .         .
                      "Arun"                         2012 7.47  9.646141   .507467         .         .
                      "Arun"                         2013 7.46  9.677715  .4877989         .         .
                      "Arun"                         2014 7.67  9.654898  .4872389         .         .
                      "Arun"                         2015 7.43  9.684461  .3808864         .         .
                      "Arun"                         2016 7.45  9.694924  .5147263         .         .
                      "Ashfield"                     2011 7.13  9.716496  .8753946         .         .
                      "Ashfield"                     2012 6.83  9.739143  .9341763  .6721842         .
                      "Ashfield"                     2013 7.36  9.758288  .9503802  .6942243         .
                      "Ashfield"                     2014 7.65  9.784423  .9426482  .7184909         .
                      "Ashfield"                     2015 7.59  9.778661  1.061233  .7612133         .
                      "Ashfield"                     2016 7.43  9.791214   1.06779  .7481219         .
                      "Ashford"                      2011 7.52  9.955463  .2696319         .         .
                      "Ashford"                      2012 7.19  9.988288  .2702065  .6663187         .
                      "Ashford"                      2013 7.62  9.990674   .307113  .6297149         .
                      "Ashford"                      2014 7.65  9.992322  .1781358  .6550063         .
                      "Ashford"                      2015 7.35  9.984054  .1782066  .6703339         .
                      "Ashford"                      2016 7.79  10.03434   .177081   .684728         .
                      "Aylesbury Vale"               2011 7.46 10.032804 -.5429591         .         .
                      "Aylesbury Vale"               2012 7.41 10.090258 -.4340672  .6490626         .
                      "Aylesbury Vale"               2013 7.44  10.07828 -.2914338  .6831195         .
                      "Aylesbury Vale"               2014 7.38 10.121096 -.3791071  .7204379         .
                      "Aylesbury Vale"               2015 7.64 10.136225 -.2472862  .7320778         .
                      "Aylesbury Vale"               2016 7.62 10.143764 -.1354222  .7583389         .
                      "Babergh"                      2011 7.73  9.802064 -.7105495         .         .
                      "Babergh"                      2012 7.56   9.84241 -.6971641         .         .
                      "Babergh"                      2013 8.18  9.873029 -.5625032         .         .
                      "Babergh"                      2014 7.64  9.920885 -.6650317         .         .
                      "Babergh"                      2015 7.37  9.941265 -.5540916         .         .
                      "Babergh"                      2016 7.58  9.970024 -.4942269         .         .
                      "Barking and Dagenham"         2011 6.91   9.69048 -.5490144         .  6.175478
                      "Barking and Dagenham"         2012 7.05  9.718904 -.5177038  .7355168  6.418407
                      "Barking and Dagenham"         2013 7.16  9.762384 -.3908235  .7627139  6.829811
                      "Barking and Dagenham"         2014 7.26  9.819725 -.5252098  .7621386  7.272108
                      "Barking and Dagenham"         2015 7.45  9.825148 -.4835354  .7794468  7.301159
                      "Barking and Dagenham"         2016 7.38   9.85849 -.5375493  .7968847  7.521914
                      "Barnet"                       2011 7.26 10.041466 -.9420418         .  5.677123
                      "Barnet"                       2012 7.23 10.072175 -.7897509  .9531717  5.760311
                      "Barnet"                       2013 7.53  10.10602 -.6697531  .9803593   5.90438
                      "Barnet"                       2014 7.44   10.1793 -.7071853  .9983977  6.026084
                      "Barnet"                       2015 7.62  10.21042  -.641583 1.0038208  6.036459
                      "Barnet"                       2016 7.63  10.21314 -.6771554 1.0096059  6.121688
                      "Barnsley"                     2011 7.12  9.480749 -1.833977         .  6.375013
                      "Barnsley"                     2012 7.21 9.4976225 -1.566166  .7197176  6.588596
                      "Barnsley"                     2013 7.32  9.550734 -1.081337  .7260988  6.780453
                      "Barnsley"                     2014 7.37  9.567525 -1.344462  .7377647  6.915158
                      "Barnsley"                     2015 7.51  9.597098 -1.148816  .7492258  7.015146
                      "Barnsley"                     2016 7.46  9.620594 -1.097194  .7606456  7.203515
                      "Barrow-in-Furness"            2011 7.49  9.848926  .0757693         .         .
                      "Barrow-in-Furness"            2012  7.2  9.881088  .2252808         .         .
                      "Barrow-in-Furness"            2013 6.72  9.967072  .5457556         .         .
                      "Barrow-in-Furness"            2014 7.15 10.001793  .4825257         .         .
                      "Barrow-in-Furness"            2015 7.47 10.148705  .3355675         .         .
                      "Barrow-in-Furness"            2016 7.69 10.149996  .3518003         .         .
                      "Basildon"                     2011 7.09  9.947504 -.6765698         .         .
                      "Basildon"                     2012 7.07  9.974691 -.5476797         .         .
                      "Basildon"                     2013 7.31 10.000795 -.2588611         .         .
                      "Basildon"                     2014 7.52  9.995292 -.4439954         .         .
                      "Basildon"                     2015 7.61 10.026457 -.3318758         .         .
                      "Basildon"                     2016 7.49 10.030032 -.3681082         .         .
                      "Basingstoke and Deane"        2011 7.43 10.225498 -.0224801         .         .
                      "Basingstoke and Deane"        2012 7.36  10.21746  .2233884         .         .
                      "Basingstoke and Deane"        2013  7.3 10.317185  .3597228         .         .
                      "Basingstoke and Deane"        2014 7.38  10.32689  .3152677         .         .
                      "Basingstoke and Deane"        2015 7.57 10.291094  .2705517         .         .
                      "Basingstoke and Deane"        2016 7.93 10.292993  .3137482         .         .
                      "Bassetlaw"                    2011 7.35  9.792276  1.659294         .         .
                      "Bassetlaw"                    2012 7.44  9.798072  1.684273  .6897168         .
                      "Bassetlaw"                    2013 7.43  9.814328   1.71198  .7050007         .
                      "Bassetlaw"                    2014 7.82  9.859326   1.64843  .7070529         .
                      "Bassetlaw"                    2015 7.48  9.897167  1.680236  .7268098         .
                      "Bassetlaw"                    2016 7.73  9.925543  1.623864  .6904373         .
                      "Bath and North East Somerset" 2011  7.5  10.04594 -.9534478         . 4.3576202
                      "Bath and North East Somerset" 2012 7.54 10.086143 -.8892313  .8045235  4.441681
                      "Bath and North East Somerset" 2013 7.35 10.071203 -.6645631  .8074634 4.5945125
                      "Bath and North East Somerset" 2014 7.58  10.10924 -.7431815  .7983528  4.605993
                      "Bath and North East Somerset" 2015 7.57 10.115166 -.6231112  .8091845 4.5969257
                      "Bath and North East Somerset" 2016 7.47 10.150152 -.5976044  .8016892   4.75305
                      "Bedford"                      2011 7.16  9.942997 -1.125987         .  6.169249
                      "Bedford"                      2012    7  9.915959 -.9937379         .  6.330461
                      "Bedford"                      2013 7.53   9.96086 -.6886829         .  6.601557
                      "Bedford"                      2014 7.62  10.01122 -.6988265         .   6.77705
                      end



                      • #12
                        Originally posted by William Lisowski View Post
                        Thank you so much, I have pasted all my results.



                        • #13
                          Originally posted by Weiwen Ng View Post
                          I have pasted all my results (OLS, GLS, and the tobit model) and also the data. What do you think about it now?



                          • #14
                            It seems, as already underlined, that the Y variable is the main issue. No matter which of the models shared above we look at, the coefficients appear to be quite low. Statistically significant as they are, there may be a concern in terms of relevance as well as effect size.
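                            To make the magnitude concrete: assuming lgva is the natural log of GVA per head (its values of roughly 9.5-10.3 in post #11 are consistent with that), the random-effects coefficient of about .054 in post #10 implies that a 10% rise in GVA per head moves mean life satisfaction by only about .005 points on the 0-10 scale:

                            Code:
                            display 0.0537 * ln(1.1)   // ≈ .0051 scale points per 10% increase in GVA per head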
                            Best regards,

                            Marcos



                            • #15
                              Originally posted by Marcos Almeida View Post
                              Since the data are very finely divided (the UK has been split into 326 local authorities), I think the effect is small because of the small sample size within each authority. I don't expect the effect size to be large anyway, since this is local-authority-level data and not country-level data; in fact, even at the country level (according to other studies), the effect size is not especially large.

