  • #16
    Dear Statalisters,

    You have helped me previously with this question in this thread, but now I want to try another way to address this analysis. I have a time variable (neu_qsif) and an su variable (sales in standard units). I want to generate a new variable that flags only those brands whose sales are entering the downward phase or have entered a new brand life cycle, as in the example in this dataex. Namely, sales in su should have fallen to at least 10% of the peak (=max su) in the second consecutive quarter after the peak or entered a new cycle. I also generated lagsu and lag2su variables for that. The time variable (neu_qsif) always has su data until 2014q4, and in some cases until 2016q4. The new variable should just indicate whether or not this is the case. Many thanks.

    Code:
    * Example generated by -dataex-. To install: ssc install dataex
    clear
    input str30(country thclass) str21 intname float(brand su lagsu lag2su neu_qsif)
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  6.34     .     . 172
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  8.82  6.34     . 173
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  9.47  8.82  6.34 174
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  8.72  9.47  8.82 175
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 12.93  8.72  9.47 176
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 13.79 12.93  8.72 177
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  13.9 13.79 12.93 178
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 15.06  13.9 13.79 179
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 13.12 15.06  13.9 180
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 12.47 13.12 15.06 181
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 10.65 12.47 13.12 182
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  11.2 10.65 12.47 183
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 12.42  11.2 10.65 184
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 13.67 12.42  11.2 185
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  7.87 13.67 12.42 186
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  9.01  7.87 13.67 187
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  9.45  9.01  7.87 188
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  9.12  9.45  9.01 189
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  8.42  9.12  9.45 190
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  9.43  8.42  9.12 191
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 10.01  9.43  8.42 192
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  8.05 10.01  9.43 193
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  8.27  8.05 10.01 194
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  9.08  8.27  8.05 195
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  7.51  9.08  8.27 196
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  8.13  7.51  9.08 197
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  6.14  8.13  7.51 198
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  6.87  6.14  8.13 199
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  7.94  6.87  6.14 200
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  9.52  7.94  6.87 201
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  8.11  9.52  7.94 202
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  7.21  8.11  9.52 203
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  3.93  7.21  8.11 204
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  3.43  3.93  7.21 205
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551   3.4  3.43  3.93 206
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  4.53   3.4  3.43 207
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  7.03  4.53   3.4 208
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551  6.68  7.03  4.53 209
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 10.71  6.68  7.03 210
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 14.51 10.71  6.68 211
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 14.08 14.51 10.71 212
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 14.22 14.08 14.51 213
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 13.38 14.22 14.08 214
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 12.99 13.38 14.22 215
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 12.89 12.99 13.38 216
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 12.18 12.89 12.99 217
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 12.77 12.18 12.89 218
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551 14.45 12.77 12.18 219
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551     . 14.45 12.77 220
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551     .     . 14.45 221
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551     .     .     . 222
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551     .     .     . 223
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551     .     .     . 224
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551     .     .     . 225
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551     .     .     . 226
    "France" "M5B4  BISPHOSPH TUMOUR-RELATED" "PAMIDRONIC AC HOSI" 1551     .     .     . 227
    end
    format %tq neu_qsif



    • #17
      Namely, sales in su should have fallen to at least 10% of the peak (=max su) in the second consecutive quarter after the peak or entered a new cycle.
      Can you clarify two things that I don't get?

      1. Do you really mean su should have fallen to 10% of its peak value or lower? Or do you mean su should have fallen by 10% to 90% of its peak value or lower? Or something else?

      2. How is a new cycle defined? I don't see any cycle variable in the data. Is it somehow defined in terms of the other variables? If so, in what way?



      • #18
        Originally posted by Clyde Schechter View Post
        Can you clarify two things that I don't get?

        1. Do you really mean su should have fallen to 10% of its peak value or lower? Or do you mean su should have fallen by 10% to 90% of its peak value or lower? Or something else?

        2. How is a new cycle defined? I don't see any cycle variable in the data. Is it somehow defined in terms of the other variables? If so, in what way?
        Hello,

        1. I am sorry for my poor formulation of the question. Of course, su should have fallen BY 10% of its peak in the second consecutive quarter after the peak.
        2. You are right. There is no cycle variable in the data, and I have to give some thought to how to define one. In the analysis I count the time (in years) needed to reach peak sales. If the data enter a new cycle, I want to count the time needed to reach the first peak (15.06 in the su variable in the dataex), not the second one (if, theoretically, there were a later value above 15.06, which is not the case here).



        • #19
          Well, except for the "new cycle" issue which remains unresolved, I believe the following does what you want:

          Code:
          // IDENTIFY THE FIRST QUARTER WHERE su PEAKS
          gen byte missing_su = missing(su)
          //    SORT DATA WITH HIGHEST NON-MISSING SU FIRST
          gsort brand missing_su -su neu_qsif
          by brand: gen peak_quarter = neu_qsif[1]
          format peak_quarter %tq
          by brand: gen peak_su = su[1]
          
          //    NOW MARK ANY PERIOD OF 2 CONSECUTIVE QUARTERS WHERE
          //    su IS 10% BELOW PEAK, AFTER THE PEAK
          xtset brand neu_qsif
          gen byte downward_phase = neu_qsif-peak_quarter >= 2 & su <= 0.9*peak_su & L1.su <= 0.9*peak_su
          replace downward_phase = . if missing(su)

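          For the time-to-peak calculation mentioned in #18, here is a minimal sketch along the same lines, assuming the variables created above and taking the launch as the first quarter with non-missing su (an assumption, since no launch date appears in this dataex):

          Code:
          //    SKETCH: YEARS FROM FIRST OBSERVED QUARTER TO THE (FIRST) PEAK
          //    peak_quarter already picks the earliest peak quarter, because
          //    neu_qsif breaks ties in ascending order in the -gsort- above
          by brand: egen launch_quarter = min(cond(!missing(su), neu_qsif, .))
          format launch_quarter %tq
          gen years_to_peak = (peak_quarter - launch_quarter)/4
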


          • #20
            Originally posted by Clyde Schechter View Post
            Well, except for the "new cycle" issue which remains unresolved, I believe the following does what you want:

            Thank you very much for the help again!



            • #21
              Dear Statalisters,

              I need to calculate the 25th percentile, 75th percentile, and mean of su for the first year after product launch (this could be from su305 to su1205, or from su606 to su307), then for the first 5 years and 10 years. I have data until the fourth quarter of 2016 (su1216). I know this can be done with the -summarize, detail- command, but because of the different launch dates I don't know how to calculate it over the first four su values without counting the pre-launch zeros. Thank you very much for the help in advance.


              Code:
              * Example generated by -dataex-. To install: ssc install dataex
              clear
              input str7 country str30 thclass str18 intname str7 ldate long(su305 su605 su905 su1205 su306 su606 su906 su1206 su307 su607 su907 su1207)
              "Spain"  "L1G0  MAB ANTINEOPLASTICS"      "ERBITUX"            "1/2005" 2882 5436 5866 6594 7330  8146  8135 10030 11169 11819 12481 14638
              "Italy"  "R3D1  CORTICOIDS INHALANTS"     "FLUNISOLIDE RAN"    "1/2005"   29   26   18   29   29    19    12    19    27     8     1     0
              "Spain"  "L1X9  ALL OTH. ANTINEOPLASTICS" "LYSODREN"           "1/2005"   18   45   45   35   14    32     9    20    18    48    43    17
              "USA"    "L1B0  ANTIMETABOLITES"          "CLOLAR"             "1/2005" 1084  608   25  183  133    74   157   215   223   198   322   408
              "USA"    "A10C1 H INSUL+ANG FAST ACT"     "APIDRA"             "1/2006"    0    0    0    0 1870  2094  4148  5805  7098  8271  9379 10634
              "USA"    "L4X0  OTHER IMMUNOSUPPRESSANTS" "REVLIMID"           "1/2006"    0    0    0    0 4738  5260 16123 21166 34064 37591 39956 42906
              "USA"    "L1B0  ANTIMETABOLITES"          "ARRANON"            "1/2006"    0    0    0    0 1085   904   816  1035  1234  1276  1022  1412
              "USA"    "A10K2 GLITAZONE & S-UREA COMBS" "AVANDARYL"          "1/2006"    0    0    0    0 9361 13168 18792 22594 25499 24486 17689 15797
              "Spain"  "L1H0  PROTEIN KINASE INH A-NEO" "SUTENT"             "1/2007"    0    0    0    0    0    58   116   129  2235  5060  6153  7111
              "Italy"  "R3D1  CORTICOIDS INHALANTS"     "LUNIS"              "1/2007"    0    0    0    0    0     0     0     0   188   433   338  1118
              "France" "L1H0  PROTEIN KINASE INH A-NEO" "SUTENT"             "1/2007"    0    0    0    0   12  1417  3699  5914 10609 13036 14279 15775
              "France" "A10C1 H INSUL+ANG FAST ACT"     "INSUPLANT"          "1/2007"    0    0    0    0    0     0     0     0     0     0     0     0
              "Spain"  "L1H0  PROTEIN KINASE INH A-NEO" "NEXAVAR"            "1/2007"    0    0    0    0    0     0    43    22   400  1125  1997  2871
              "Japan"  "L1B0  ANTIMETABOLITES"          "ALIMTA"             "1/2007"    0    0    0    0    0     0     0     0  1206  2508  2321  2401
              "Spain"  "L2B2  CYTO ANTI-ANDROGENS"      "BICALUTAMIDE  STAD" "1/2007"    0    0    0    0    0     0     0     0   120   265   315   273
              end
              Last edited by Natalia Remel; 02 May 2017, 11:59.



              • #22
                Here is the same question again, but with another table in which ldate (launch date) is properly formatted. I was taught here in the forum how to format time variables, so I decided to repost the example.

                Code:
                * Example generated by -dataex-. To install: ssc install dataex
                clear
                input str7 country str30 thclass str18 intname float ldate int su305 long(su605 su905 su1205 su306 su606 su906 su1206 su307 su607 su907 su1207)
                "USA"     "A10K2 GLITAZONE & S-UREA COMBS" "AVANDARYL"          552 0 0 0 0 9361 13168 18792 22594 25499 24486 17689 15797
                "USA"     "L4X0  OTHER IMMUNOSUPPRESSANTS" "REVLIMID"           552 0 0 0 0 4738  5260 16123 21166 34064 37591 39956 42906
                "USA"     "A10C1 H INSUL+ANG FAST ACT"     "APIDRA"             552 0 0 0 0 1870  2094  4148  5805  7098  8271  9379 10634
                "USA"     "L1B0  ANTIMETABOLITES"          "ARRANON"            552 0 0 0 0 1085   904   816  1035  1234  1276  1022  1412
                "Italy"   "R3D1  CORTICOIDS INHALANTS"     "CHARLYN"            553 0 0 0 0    9    42    30    90   139    88    51   168
                "Germany" "L4X0  OTHER IMMUNOSUPPRESSANTS" "IMUREL"             553 0 0 0 0    0     0     0     0     0     0     0     0
                "USA"     "L2B2  CYTO ANTI-ANDROGENS"      "FLUTAMIDE     ENDN" 553 0 0 0 0  104   398   554   502   520   521   489   396
                "USA"     "L1H0  PROTEIN KINASE INH A-NEO" "SUTENT"             553 0 0 0 0 9079 29167 41548 52283 54116 57013 59425 59777
                "Italy"   "A10C1 H INSUL+ANG FAST ACT"     "APIDRA"             553 0 0 0 0   80   296   424   616   809  1031  1123  1560
                "UK"      "R3D1  CORTICOIDS INHALANTS"     "BUDESONIDE    ORIN" 554 0 0 0 0    5    25    53    54    59    74    93   122
                "UK"      "R3D1  CORTICOIDS INHALANTS"     "BUDESONIDE    ALER" 554 0 0 0 0    0     0     0     0     0     0     1     1
                "Spain"   "L1H0  PROTEIN KINASE INH A-NEO" "TARCEVA"            554 0 0 0 0  600  3986  5721  7450  7607  8643  9338 11002
                "France"  "A10C1 H INSUL+ANG FAST ACT"     "APIDRA"             554 0 0 0 0    4    62   169   316   477   654   859  1147
                "USA"     "A10C5 H INSUL+ANG LONG ACT"     "LEVEMIR"            554 0 0 0 0 4412  4450  8404 12145 15551 20356 24044 30262
                end
                format %tm ldate
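                For reference, the string launch dates shown in #21 (e.g. "1/2006") can be converted into this monthly format with Stata's monthly() function; a minimal sketch, assuming the month/year pattern from #21 (the name ldate_m is hypothetical):

                Code:
                * SKETCH: convert a string launch date like "1/2006" to a Stata monthly date
                gen ldate_m = monthly(ldate, "MY")   // assumes ldate is still a string, as in #21
                format %tm ldate_m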



                • #23
                  I'm not sure I understand exactly what you want. The following code assumes that you want the 25th, 50th, and 75th percentiles of su during the following periods: the first calendar year after the launch year (so 2007 when the launch year was 2006), the first 5 calendar years after the launch year (2007 through 2011), and the first ten calendar years after the launch year (2007 through 2016). I also assume that this is to be done separately for each combination of country and intname.

                  The first problem to overcome is the wide layout of the data and the encoding of time periods in the variable names. Like most things in Stata, this is best done in long layout (in fact, in this case I believe it is impossible in wide layout). Then we have to convert codes like 305 to March 2005 as a Stata internal date, and extract the year from those dates. Finally, -egen, pctile()- brings us home.

                  Code:
                  * Example generated by -dataex-. To install: ssc install dataex
                  clear
                  input str7 country str30 thclass str18 intname float ldate int su305 long(su605 su905 su1205 su306 su606 su906 su1206 su307 su607 su907 su1207)
                  "USA"     "A10K2 GLITAZONE & S-UREA COMBS" "AVANDARYL"          552 0 0 0 0 9361 13168 18792 22594 25499 24486 17689 15797
                  "USA"     "L4X0  OTHER IMMUNOSUPPRESSANTS" "REVLIMID"           552 0 0 0 0 4738  5260 16123 21166 34064 37591 39956 42906
                  "USA"     "A10C1 H INSUL+ANG FAST ACT"     "APIDRA"             552 0 0 0 0 1870  2094  4148  5805  7098  8271  9379 10634
                  "USA"     "L1B0  ANTIMETABOLITES"          "ARRANON"            552 0 0 0 0 1085   904   816  1035  1234  1276  1022  1412
                  "Italy"   "R3D1  CORTICOIDS INHALANTS"     "CHARLYN"            553 0 0 0 0    9    42    30    90   139    88    51   168
                  "Germany" "L4X0  OTHER IMMUNOSUPPRESSANTS" "IMUREL"             553 0 0 0 0    0     0     0     0     0     0     0     0
                  "USA"     "L2B2  CYTO ANTI-ANDROGENS"      "FLUTAMIDE     ENDN" 553 0 0 0 0  104   398   554   502   520   521   489   396
                  "USA"     "L1H0  PROTEIN KINASE INH A-NEO" "SUTENT"             553 0 0 0 0 9079 29167 41548 52283 54116 57013 59425 59777
                  "Italy"   "A10C1 H INSUL+ANG FAST ACT"     "APIDRA"             553 0 0 0 0   80   296   424   616   809  1031  1123  1560
                  "UK"      "R3D1  CORTICOIDS INHALANTS"     "BUDESONIDE    ORIN" 554 0 0 0 0    5    25    53    54    59    74    93   122
                  "UK"      "R3D1  CORTICOIDS INHALANTS"     "BUDESONIDE    ALER" 554 0 0 0 0    0     0     0     0     0     0     1     1
                  "Spain"   "L1H0  PROTEIN KINASE INH A-NEO" "TARCEVA"            554 0 0 0 0  600  3986  5721  7450  7607  8643  9338 11002
                  "France"  "A10C1 H INSUL+ANG FAST ACT"     "APIDRA"             554 0 0 0 0    4    62   169   316   477   654   859  1147
                  "USA"     "A10C5 H INSUL+ANG LONG ACT"     "LEVEMIR"            554 0 0 0 0 4412  4450  8404 12145 15551 20356 24044 30262
                  end
                  format %tm ldate
                  
                  //    GO LONG
                  reshape long su, i(country intname) j(my)
                  
                  //    DECODE THE ORIGINAL SU* VARIABLE NAMES INTO MONTHLY DATES
                  gen current_month = mofd(mdy(floor(my/100), 1, 2000+mod(my, 100)))
                  format current_month %tm
                  
                  //    CALCULATE THE YEAR OF LAUNCH AND CURRENT YEAR
                  generate lyear = yofd(dofm(ldate))
                  generate current_year = yofd(dofm(current_month))
                  
                  //    CALCULATE PERCENTILES
                  foreach n of numlist 25 50 75 {
                      by country intname, sort: egen year1_`n'_pctile ///
                          = pctile(cond(current_year == lyear+1, su, .)), p(`n')
                      by country intname: egen year5_`n'_pctile = ///
                          pctile(cond(inrange(current_year, lyear+1, lyear+5), su, .)), p(`n')
                      by country intname: egen year10_`n'_pctile = ///
                          pctile(cond(inrange(current_year, lyear+1, lyear+10), su, .)), p(`n')
                  }
                  Note that your example data, unlike your real data, only goes up to 2007, and the launch dates are all in 2006, so the five and ten year statistics come out the same as the one year statistic.

                  If I have misunderstood what you want the time periods to be, just modify the various cond(...) expressions according to your need.
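                  #21 also asked for the mean of su over these windows; a parallel -egen, mean()- call should handle that. A minimal sketch, reusing the lyear and current_year variables created above (the year*_mean names are hypothetical):

                  Code:
                  //    SKETCH: MEANS OVER THE SAME WINDOWS AS THE PERCENTILES
                  by country intname: egen year1_mean = mean(cond(current_year == lyear+1, su, .))
                  by country intname: egen year5_mean = ///
                      mean(cond(inrange(current_year, lyear+1, lyear+5), su, .))
                  by country intname: egen year10_mean = ///
                      mean(cond(inrange(current_year, lyear+1, lyear+10), su, .))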



                  • #24
                    I would like to say a huge thank you again! This is precisely what I wanted.

                    Originally posted by Clyde Schechter View Post

                    Note that your example data, unlike your real data, only goes up to 2007, and the launch dates are all in 2006, so the five and ten year statistics come out the same as the one year statistic.

                    If I have misunderstood what you want the time periods to be, just modify the various cond(...) expressions according to your need.



                    • #25
                      A theme worth noting in this thread: data given in wide layout had to be transformed by the solver into long layout to solve the problem at hand. The advice about wide versus long layout given in #8 above, and again by Clyde just now, bears close attention.

                      If you transform the underlying datasets from wide to long layout and work with them in that form, you will be able to apply long-layout techniques to subsequent problems more easily, and perhaps make quicker progress in your work. As a small illustration, assuming the long-layout data produced by the -reshape- in #23: a per-product summary takes a single line in long layout, whereas the wide-layout equivalent must spell out every su* variable, as sketched below.
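
                      Code:
                      // LONG LAYOUT: one -egen- call per summary
                      bysort country intname: egen mean_su = mean(su)
                      // WIDE LAYOUT: the row-wise equivalent must list every variable
                      // egen mean_su = rowmean(su305 su605 su905 su1205 ...)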



                      • #26
                        Thank you for your advice. The last question I posted referred not to the "main" dataset I work with (even though they look similar), and I was not sure whether it could be solved only in long format. Thank you to Clyde again. The "main" dataset I work with is in long format, and I have understood that many techniques can be applied to long-layout data, so I do so. Having worked a bit with Stata, I have realized how great this software is, and I am so glad this forum exists, because I am very often in a tight corner.


                        Originally posted by William Lisowski View Post
                        A theme worth noting in this thread: data given in wide layout had to be transformed by the solver into long layout to solve the problem at hand.

