  • #16
    If you want 1:1, then add k2k and then restrict sample to cem_matched.
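    A minimal sketch of what that might look like, assuming the community-contributed -cem- package (from SSC) and hypothetical variable and treatment names:

    Code:
    * ssc install cem   // community-contributed; install once
    cem assets industry, treatment(treated) k2k
    keep if cem_matched == 1   // retain only the matched sample

    Here -k2k- prunes each stratum to equal numbers of treated and control units, and cem_matched is the indicator variable that -cem- creates for matched observations.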

    • #17
      Well, assuming that any given firm can only appear on a single stock exchange (I have no idea if that is true), the modification of the end of the code along the lines you mentioned is not all that difficult, I think. I've never tried to write that code, so I'm not sure there isn't something I'm overlooking, but that's my intuition. If the same firm can appear on multiple stock exchanges, it gets much more complicated, because once a firm is used in a match you have to check for removing it from all of the stock exchanges.

      That said, there is a different issue: memory use. When you run that -joinby- command, the resulting data set is a large multiple of the original data set. If you were to join all ten stock exchanges in one go, you would quickly blow through all available memory as the data set size explodes. So what I would really do if faced with that problem is ten consecutive matches. That is, I would start with the first two stock exchanges and match them. Then I would match the third stock exchange to the first-second pairs, then the fourth exchange to the results of that, and so on. But there are some difficult questions you need to decide here. What is the meaning of a caliper match? At present it is simple: the ratio of the two asset sizes must be between 0.75 and 1.25. But with a third exchange thrown into the mix, what is the requirement? After all, if the second firm's size is 1.2 times the first's, and the third's is 1.2 times the second's, then the third's is 1.44 times the first's. So is that allowable? Probably not. So at each step, the code for identifying acceptable matches has to loop over all of the already matched exchanges and restrict the new incoming match to satisfy the caliper with all of them. I should also point out that the more exchanges you have, the harder it gets to find firms that meet these increasingly stringent requirements, and you get more and more unmatched results.
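      To illustrate that looping idea concretely (an untested sketch with hypothetical variable names and suffixes, not code from this thread): after -joinby- brings in the candidate exchange, the caliper would be checked against every exchange matched so far, along these lines:

      Code:
      * Suffix _new marks the incoming exchange; the foreach list holds
      * the suffixes of the exchanges already matched.
      gen byte ok = 1
      foreach s in 11 120 90 {
          replace ok = 0 if !inrange(assets_new/assets_`s', 0.75, 1.25)
      }
      keep if ok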

      While matching is a powerful way to control for extraneous differences between samples that you want to compare to each other, it is, except in relatively simple cases, difficult to implement without replacement because finding suitable matches gets harder and harder. That's why you will generally find only relatively simple matching schemes used in the literature.

      • #18
        Right, yes, this makes sense. Fortunately, I am able to filter my data such that I do not use any cross-listed firms (i.e., those listed on more than one stock exchange), which means the data set with ten stock exchanges should include only firms listed exclusively on their domestic stock exchange.

        Unfortunately, given the sheer size of a stock exchange, I fear the memory issue would arise for even five different exchanges, let alone ten or more. Building off your potential solution, would it be feasible to find the average size of the first matched pair (say, firm 1 on the NYSE and firm 2 on the Bombay SE) and then impose the caliper on the next firm relative to that average, e.g., firm 3 on the LSE must have a ratio of assets between 0.75 and 1.25 relative to the average of the pair? Then the fourth firm, on the Shanghai SE, would have to be within the same ratio relative to the average of the three firms already matched, and so on (if that makes sense?). Or could this lead to potential robustness issues in the methodology, in that finding a new match at each step (i.e., matching the first two firms, then adding the third, then the fourth, etc.) is technically a different method?

        Understood, I had a feeling that this may be the case as I had struggled to find existing literature that matched more than just pairs, let alone multiple such as ten.

        • #19
          Well, the idea of averaging the previous sizes and applying a caliper around that will cause the center of the caliper to do a random walk. That means that as the number of stock exchanges grows, the variance of the caliper center will grow. Most observations will be reasonably well matched (or unmatched), but there will be a penumbra of badly matched observations, and that penumbra will grow with the number of stock exchanges. It also implies that the earlier rounds of matching will be more stringent than the later ones, so the matching isn't equally stringent across stock exchanges. So overall, I think this degrades the performance of the matching considerably. To really see how this plays out would require setting up a simulation of the process and seeing what kinds of matches it produces--but that's a time-consuming task that I'm not positioned to undertake currently. My instinct, though, is that this approach will fail to produce matches of satisfactory quality. I can imagine various tweaks to the idea that would reduce that problem, but at the cost of leaving more observations unmatched.
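          As a rough illustration of that drift (a toy simulation under the simplifying assumption that each new firm's size is drawn uniformly within the caliper around the running average of the sizes already matched):

          Code:
          clear
          set seed 12345
          set obs 1000                   // 1,000 simulated matched groups
          gen double center = 1          // first firm's size, normalized to 1
          forvalues k = 2/10 {
              gen double size`k' = center*(0.75 + 0.5*runiform())
              replace center = ((`k'-1)*center + size`k')/`k'
              quietly summarize size`k'
              display "round `k': size relative to firm 1 ranges " ///
                  %5.3f r(min) " - " %5.3f r(max)
          }

          The range of sizes relative to the first firm widens with each round, which is the growing penumbra of poor matches described above.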

          • #20
            Yes, I take your point - the use of averages would indeed mean that the quality of matches fades with each additional firm added to the matched group. Just on a more experimental note, how many stock exchanges do you suspect could work without overloading Stata's memory? It does not surprise me that ten is too many, but could five be feasible?

            • #21
              It depends on how many firms and industries you have per stock exchange in each year. You can conserve memory to a great extent by breaking the data up into 1-year chunks and doing the match separately in each year, since you have agreed that match consistency across years is not required.

              Suppose that in some year there are I industries, with N1i (i = 1,...,I) firms per industry in one exchange and N2i (i = 1,...,I) in another. Then the two exchange data sets for that year contain Sum_{i=1,...,I} N1i and Sum_{i=1,...,I} N2i observations, respectively. When you combine them with -joinby-, the resulting data set contains Sum_{i=1,...,I} N1i*N2i observations. That is the peak memory requirement for the task: everything after that removes observations, and it is what limits you for the first round of matching within the year. You end up with a smaller data set whose size depends on how many acceptable matches were found. You will now have a data set with, probably, the same industries and N12i values that are <= min(N1i, N2i). That becomes the new "N1i" for the match with the third stock exchange (where N2i's role is taken over by N3i). And so on.

              I think the key point here is that, with each new stock exchange, you get a "fresh start" with I and N1i values no worse than what you started with in the previous round. So the feasibility of the next round depends on whether the next stock exchange's Ni's are so large that the next Sum_{i=1,...,I} N1i*N2i is too big for you to continue. I should emphasize that we are in this relatively optimistic situation only because consistency of firm matching across years is not required: if it were, you would not be able to do this one year at a time, and the situation would be considerably worse.

              I note also that to conserve memory, when attempting this you should carry in your data sets only the firm ID, stock exchange ID, industry code, year, and size variables. These are the only ones that play a role in the match--everything else about them should be discarded for the matching. You can -merge- that other information back in to the final multi-exchange match at the end.
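              A hypothetical sketch of that year-by-year strategy (variable names as in the poster's data set; the two-exchange matching code goes where indicated):

              Code:
              // KEEP ONLY THE VARIABLES THAT PLAY A ROLE IN THE MATCH
              keep GlobalCompanyKey StockExchangeCode NAICS DataYearFiscal AssetsTotal
              tempfile full matched
              save `full'
              levelsof DataYearFiscal, local(years)
              local first = 1
              foreach y of local years {
                  use if DataYearFiscal == `y' using `full', clear
                  // ... run the matching code on this one year of data ...
                  if !`first' append using `matched'
                  save `matched', replace
                  local first = 0
              }
              use `matched', clear
              // -merge- the discarded firm-level variables back in here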

              Again, though, I don't think this is really a practical approach for many stock exchanges because the matches will end up either being of poor quality or too hard to find, and thus non-existent. Certainly people do matched triplets. And I vaguely recall one study with matched quadruples. I have never seen a study using matched quintuples, though I can't be sure they don't exist. (Also, bear in mind that I don't work in finance or economics, so the literature I follow is different from what you do. I follow the epidemiologic and medical literature.)

              • #22
                Ok, understood. Unfortunately, the financial literature appears to be similar to the epidemiological and medical literature, in that I do not believe I have come across even a triple match, and certainly not a quadruple. With that in mind, I will take on board your points regarding memory conservation and see if breaking the data into one-year chunks could work. If not, then at the least your help with getting the matching sorted for two stock exchanges has been hugely helpful and educational. For the time being I would call my query satisfied, though I am sure I will be back on this forum again with another task which has stumped me. Thank you!

                • #23
                  When you say triple match, do you mean a 1:3 match?

                  • #24
                    I believe that was the case in the thread that was originally linked here as part of the first response. However, for my case it is more about matching to form a matched group, i.e., a triplet match would entail three firms listed on different stock exchanges, but within the same industry and of similar size (as per the 0.75 to 1.25 caliper).

                    • #25
                      Clyde Schechter I hope you are well; apologies for bringing you back to this thread again. I just wanted to ask what the most effective/academically robust and accepted method would be to form triplets using a similar dataset as before (dataex below). You highlighted that averaging the previous sizes (i.e., those already paired) and applying a caliper around that will cause the center of the caliper to do a random walk, meaning that as the number of stock exchanges grows, the variance of the caliper center will grow. I wondered if calculating a geometric mean, rather than a simple arithmetic average, of the two sizes could limit the variance? If not, then any alternative robust method to form matched triplets would be hugely appreciated.

                      Code:
                      * Example generated by -dataex-. For more info, type help dataex
                      clear
                      input str6 GlobalCompanyKey int(DataDate DataYearFiscal StockExchangeCode) double AssetsTotal byte NAICS
                      "001004" 22066 2019 90      2079 42
                      "001004" 22431 2020 90    1539.7 42
                      "001004" 22796 2021 90    1573.9 42
                      "001004" 23161 2022 90    1833.1 42
                      "001004" 23527 2023 90      2770 42
                      "001075" 22280 2020 11 20020.421 22
                      "001075" 22645 2021 11 22003.222 22
                      "001075" 23010 2022 11 22723.405 22
                      "001075" 23375 2023 11 24661.153 22
                      "001078" 22280 2020 11     72548 33
                      "001078" 22645 2021 11     75196 33
                      "001078" 23010 2022 11     74438 33
                      "001078" 23375 2023 11     73214 33
                      "001186" 22280 2020 11  9614.755 21
                      "001186" 22645 2021 11 10186.776 21
                      "001186" 23010 2022 11 23494.808 21
                      "001186" 23375 2023 11 28684.949 21
                      "001209" 22188 2020 11   25168.5 32
                      "001209" 22553 2021 11   26859.2 32
                      "001209" 22918 2022 11   27192.6 32
                      "001209" 23283 2023 11   32002.5 32
                      "001230" 22280 2020 11     14046 48
                      "001230" 22645 2021 11     13951 48
                      "001230" 23010 2022 11     14186 48
                      "001230" 23375 2023 11     14613 48
                      "001254" 22280 2020 11    2900.6 48
                      "001254" 22645 2021 11    3693.1 48
                      "001254" 23010 2022 11      4330 48
                      "001254" 23375 2023 11    4294.6 48
                      "001257" 22280 2020 11  1404.138 53
                      "001257" 22645 2021 11  1391.965 53
                      "001257" 23010 2022 11  1397.776 53
                      "001257" 23375 2023 11   1403.68 53
                      "001380" 22280 2020 11     18821 21
                      "001380" 22645 2021 11     20515 21
                      "001380" 23010 2022 11     21695 21
                      "001380" 23375 2023 11     24007 21
                      "001393" 22005 2019 11 13438.024 53
                      "001393" 22370 2020 11 14651.606 53
                      "001393" 22735 2021 11 17299.581 53
                      "001393" 23100 2022 11 18124.648 53
                      "001393" 23466 2023 11 19058.758 53
                      "001410" 22219 2020 11    3776.9 56
                      "001410" 22584 2021 11    4436.2 56
                      "001410" 22949 2022 11    4868.9 56
                      "001410" 23314 2023 11    4933.7 56
                      "001545" 22280 2020 11   865.764 53
                      "001545" 22645 2021 11   770.569 53
                      "001545" 23010 2022 11  1197.479 53
                      "001545" 23375 2023 11  1023.484 53
                      "001585" 22280 2020 11   680.293 32
                      "001585" 22645 2021 11    694.16 32
                      "001585" 23010 2022 11   726.313 32
                      "001585" 23375 2023 11   767.548 32
                      "001598" 22280 2020 11 10357.483 33
                      "001598" 22645 2021 11 11898.187 33
                      "001598" 23010 2022 11  12431.12 33
                      "001598" 23375 2023 11 15023.533 33
                      "001613" 22280 2020 11   463.208 33
                      "001613" 22645 2021 11   485.632 33
                      "001613" 23010 2022 11   502.774 33
                      "001613" 23375 2023 11   565.654 33
                      "001618" 22400 2020 90    97.366 23
                      "001618" 22765 2021 90    94.917 23
                      "001618" 23130 2022 90   115.895 23
                      "001618" 23496 2023 90    122.83 23
                      "001661" 22280 2020 11  5503.428 21
                      "001661" 22645 2021 11  5525.364 21
                      "001661" 23010 2022 11  4729.854 21
                      "001661" 23375 2023 11  5277.965 21
                      "001706" 22158 2020 11   824.294 33
                      "001706" 22523 2021 11   820.247 33
                      "001706" 22888 2022 11   757.312 33
                      "001706" 23253 2023 11   762.597 33
                      "001712" 22280 2020 11   316.833 32
                      "001712" 22645 2021 11    293.54 32
                      "001722" 22280 2020 90     49719 31
                      "001722" 22645 2021 90     56136 31
                      "001722" 23010 2022 90     59774 31
                      "001722" 23375 2023 90     54631 31
                      "001773" 22280 2020 90 17053.911 42
                      "001773" 22645 2021 90  19535.54 42
                      "001773" 23010 2022 90 21763.182 42
                      "001773" 23375 2023 90 21726.168 42
                      "001794" 22188 2020 11      6877 32
                      "001794" 22553 2021 11      6612 32
                      "001794" 22918 2022 11      6213 32
                      "001794" 23283 2023 11      5939 32
                      "001864" 21945 2019 11   500.502 32
                      "001864" 22311 2020 11   479.345 32
                      "001864" 22676 2021 11   550.361 32
                      "001864" 23041 2022 11   579.579 32
                      "001864" 23406 2023 11   664.802 32
                      "001913" 22280 2020 11    6083.9 32
                      "001913" 22645 2021 11    7971.6 32
                      "001913" 23010 2022 11    7950.5 32
                      "001913" 23375 2023 11    8209.8 32
                      "001926" 21974 2019 11  1073.831 33
                      "001926" 22339 2020 11   996.442 33
                      "001926" 22704 2021 11  1133.028 33
                      end
                      format %tdnn/dd/CCYY DataDate
                      Many thanks,
                      Alex

                      • #26
                        For matched triplets, I would first form the matched pairs. Then I would use a slight modification of the existing code, to match these pairs to firms in the third exchange. The modification of the code would be to apply the caliper to all three pairwise size-comparisons. The modification would replace
                        Code:
                        gen delta = assets_120/assets_11
                        keep if inrange(delta, 0.75, 1.25)
                        replace delta = abs(log(delta))
                        with, assuming for illustration purposes that the number of the third exchange is 99:

                        Code:
                        gen delta1 = assets_99/assets_120
                        gen delta2 = assets_99/assets_11
                        keep if inrange(delta1, 0.75, 1.25) & inrange(delta2, 0.75, 1.25)
                        replace delta1 = abs(log(delta1))
                        replace delta2 = abs(log(delta2))
                        gen delta = max(delta1, delta2)
                        This will ensure that only firms whose size fits the caliper relative to both of the firms already matched will be considered, and when the "best" one is chosen, it will be the best with respect to both of the already matched firms.

                        Note that this is a maximally stringent way to apply the caliper. With three exchanges and a large data set, this is viable. But if you start adding more exchanges after that, the stringency will eventually end up leaving too many unmatchable observations as the bar of acceptability keeps getting higher and higher.


                        • #27
                          Understood, I made the above change and got an error stating that "assets_99 not found". To try to remedy this I attempted to separate into three data sets (as the original code splits into two), however the error then is "nothing to restore". What would be the best way to separate into three data sets, given that the below is not working? (For clarity, the data set has the third exchange code as 90, hence assets_90 etc. in the code.)

                          Code:
                          // SEPARATE INTO THREE DATA SETS
                          preserve
                          keep if StockExchangeCode == 11
                          rename (GlobalCompanyKey AssetsTotal) =_11 // SUFFIX IN RENAME MUST MATCH STOCK EXCHANGE
                          drop StockExchangeCode
                          tempfile SE11 // DECLARATION IN TEMPFILE MUST MATCH SUBSEQUENT USE OF THE FILE
                          save `SE11' // N.B. NO .dta
                          
                          restore
                          keep if StockExchangeCode == 120
                          rename (GlobalCompanyKey AssetsTotal) =_120
                          drop StockExchangeCode
                          
                          restore
                          keep if StockExchangeCode == 90
                          rename (GlobalCompanyKey AssetsTotal) =_90
                          drop StockExchangeCode
                          
                          // COMBINE POTENTIAL MATCHES & SELECT BEST 3, BREAKING TIES AT RANDOM
                          joinby NAICS DataYearFiscal using `SE11.dta'
                          gen double shuffle = runiform()
                          gen delta1 = assets_90/assets_120
                          gen delta2 = assets_90/assets_11
                          keep if inrange(delta1, 0.75, 1.25) & inrange(delta2, 0.75, 1.25)
                          replace delta1 = abs(log(delta1))
                          replace delta2 = abs(log(delta2))
                          gen delta = max(delta1, delta2)
                          
                          // MATCHING WITHOUT REPLACEMENT
                          local allocation_ratio 1
                          local current 1
                          
                          sort GlobalCompanyKey_11 DataYearFiscal (delta shuffle)
                          while `current' < _N {
                              local end_current = `current' + `allocation_ratio' - 1
                              while GlobalCompanyKey_11[`end_current'] != GlobalCompanyKey_11[`current'] ///
                                  & DataYearFiscal[`end_current'] != DataYearFiscal[`current'] {
                                  local end_current = `end_current' - 1
                              }
                              // KEEP REQUIRED # OF MATCHES FOR THE CURRENT CASE
                              drop if GlobalCompanyKey_11 == GlobalCompanyKey_11[`current'] & DataYearFiscal == DataYearFiscal[`current'] in `=`end_current'+1'/L
                              // REMOVE THE SELECTED MATCHES FROM FURTHER CONSIDERATION
                              forvalues i = 0/`=`allocation_ratio'-1' {
                                  drop if GlobalCompanyKey_120 == GlobalCompanyKey_120[`current'+`i'] & DataYearFiscal == DataYearFiscal[`current' + `i'] & _n > `end_current'
                              }
                              local current = `end_current' + 1
                          }
                          
                          export excel using "matches.xlsx", firstrow(variables) replace

                          • #28
                            First, as for separating into three separate data sets: the "nothing to restore" problem arises because once you -restore- the first time, the originally -preserve-d data is gone. That is how -restore- works: it brings back the -preserve-d data but discards the -preserve-d copy. To prevent that discarding, you have to use -restore, preserve-, i.e., add the -preserve- option to -restore-. Moreover, in order to use all three of these data sets later in the code, they all need to be saved in tempfiles. This gets us to the following as the starting point:
                            Code:
                            // SEPARATE INTO THREE DATA SETS
                            preserve
                            keep if StockExchangeCode == 11
                            rename (GlobalCompanyKey AssetsTotal) =_11 // SUFFIX IN RENAME MUST MATCH STOCK EXCHANGE
                            drop StockExchangeCode
                            tempfile SE11 // DECLARATION IN TEMPFILE MUST MATCH SUBSEQUENT USE OF THE FILE
                            save `SE11' // N.B. NO .dta
                            
                            restore, preserve
                            keep if StockExchangeCode == 120
                            rename (GlobalCompanyKey AssetsTotal) =_120
                            drop StockExchangeCode
                            tempfile SE120
                            save `SE120'
                            
                            restore
                            keep if StockExchangeCode == 90
                            rename (GlobalCompanyKey AssetsTotal) =_90
                            drop StockExchangeCode
                            tempfile SE90
                            save `SE90'
                            The next step is to combine SE120 with SE11. This can be done almost exactly the way it was done earlier. Take the code from #12 in this thread starting from

                            // COMBINE POTENTIAL MATCHES & SELECT BEST 3, BREAKING TIES AT RANDOM. (As always, change the variable names in that code to match your data set.)

                            And before the -joinby- command put in -use `SE120', clear-

                            At the end of that code, the matched pairs of SE120 and SE11 are in memory. The next step is to bring in the SE90.

                            To do that, we modify the matching code in the way described in #26. I should also have mentioned in #26 that the rest of the matching code (the matching-without-replacement loop) needs modification as well: _11 is replaced by _120, and _120 is replaced by _90. So the bringing in of SE90 looks like this:
                            Code:
                            joinby industry_code year using `SE90'
                            drop shuffle
                            gen double shuffle = runiform()
                            gen delta1 = assets_90/assets_120
                            gen delta2 = assets_90/assets_11
                            keep if inrange(delta1, 0.75, 1.25) & inrange(delta2, 0.75, 1.25)
                            replace delta1 = abs(log(delta1))
                            replace delta2 = abs(log(delta2))
                            gen delta = max(delta1, delta2)
                            //    MATCHING WITHOUT REPLACEMENT
                            local allocation_ratio 1
                            local current 1
                            
                            sort id_120 year (delta shuffle)
                            while `current' < _N {
                                local end_current = `current' + `allocation_ratio' - 1
                                while id_120[`end_current'] != id_120[`current'] ///
                                    & year[`end_current'] != year[`current'] {
                                    local end_current = `end_current' - 1
                                }
                                //    KEEP REQUIRED # OF MATCHES FOR THE CURRENT CASE
                                drop if id_120 == id_120[`current'] & year == year[`current'] in `=`end_current'+1'/L    
                                //    REMOVE THE SELECTED MATCHES FROM FURTHER CONSIDERATION
                                forvalues i = 0/`=`allocation_ratio'-1' {
                                    drop if id_90 == id_90[`current'+`i'] & year == year[`current' + `i'] & _n > `end_current'
                                }
                                local current = `end_current' + 1
                            }
                            At that point you have your matched triplets in memory.
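                            One quick sanity check you might add at that point (a suggestion, with the asset-size variables suffixed _11, _120, and _90 as in the user's data set) is to -assert- that every retained triplet really satisfies all three pairwise calipers:

                            Code:
                            assert inrange(assets_120/assets_11, 0.75, 1.25)
                            assert inrange(assets_90/assets_120, 0.75, 1.25)
                            assert inrange(assets_90/assets_11, 0.75, 1.25)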
                            Last edited by Clyde Schechter; 15 Aug 2024, 13:22.

                            • #29
                              Thank you, yes, your point on the -restore- and -preserve- commands makes sense. Annoyingly, Stata produces this error: ". joinby NAICS DataYearFiscal `SE11.dta' / invalid name", which is odd given that this piece of code worked for the simple pair matching. I am unsure if I have missed something in the code I am running, but it should be in line with your suggestions and modifications:

                              Code:
                              // SEPARATE INTO THREE DATA SETS
                              preserve
                              keep if StockExchangeCode == 11
                              rename (GlobalCompanyKey AssetsTotal) =_11 // SUFFIX IN RENAME MUST MATCH STOCK EXCHANGE
                              drop StockExchangeCode
                              tempfile SE11 // DECLARATION IN TEMPFILE MUST MATCH SUBSEQUENT USE OF THE FILE
                              save `SE11' // N.B. NO .dta
                              
                              restore, preserve
                              keep if StockExchangeCode == 120
                              rename (GlobalCompanyKey AssetsTotal) =_120
                              drop StockExchangeCode
                              tempfile SE120
                              save `SE120'
                              
                              restore
                              keep if StockExchangeCode == 90
                              rename (GlobalCompanyKey AssetsTotal) =_90
                              drop StockExchangeCode
                              tempfile SE90
                              save `SE90'
                              
                              // COMBINE POTENTIAL MATCHES & SELECT BEST 3, BREAKING TIES AT RANDOM
                              use `SE120', clear
                              joinby NAICS DataYearFiscal `SE11.dta'
                              gen double shuffle = runiform()
                              gen delta = AssetsTotal_120/AssetsTotal_11
                              keep if inrange(delta, 0.75, 1.25)
                              replace delta = abs(log(delta))
                              
                              joinby NAICS DataYearFiscal using `SE90.dta'
                              drop shuffle
                              gen double shuffle = runiform()
                              gen delta1 = AssetsTotal_90/AssetsTotal_120
                              gen delta2 = AssetsTotal_90/AssetsTotal_11
                              keep if inrange(delta1, 0.75, 1.25) & inrange(delta2, 0.75, 1.25)
                              replace delta1 = abs(log(delta1))
                              replace delta2 = abs(log(delta2))
                              gen delta = max(delta1, delta2)
                              
                              // MATCHING WITHOUT REPLACEMENT
                              local allocation_ratio 1
                              local current 1
                              
                              sort GlobalCompanyKey_120 DataYearFiscal (delta shuffle)
                              while `current' < _N {
                                  local end_current = `current' + `allocation_ratio' - 1
                                  while GlobalCompanyKey_120[`end_current'] != GlobalCompanyKey_120[`current'] ///
                                      & DataYearFiscal[`end_current'] != DataYearFiscal[`current'] {
                                      local end_current = `end_current' - 1
                                  }
                                  // KEEP REQUIRED # OF MATCHES FOR THE CURRENT CASE
                                  drop if GlobalCompanyKey_120 == GlobalCompanyKey_120[`current'] & DataYearFiscal == DataYearFiscal[`current'] in `=`end_current'+1'/L
                                  // REMOVE THE SELECTED MATCHES FROM FURTHER CONSIDERATION
                                  forvalues i = 0/`=`allocation_ratio'-1' {
                                      drop if GlobalCompanyKey_90 == GlobalCompanyKey_90[`current'+`i'] & DataYearFiscal == DataYearFiscal[`current' + `i'] & _n > `end_current'
                                  }
                                  local current = `end_current' + 1
                              }

                              • #30
                                Annoyingly, Stata produces this error: ". joinby NAICS DataYearFiscal `SE11.dta' / invalid name", which is odd given that this piece of code worked for the simple pair matching. I am unsure if I have missed something in the code I am running, but it should be in line with your suggestions and modifications:
                                Coding requires fanatical attention to detail. That piece of code in the quote did not work for the simple pair matching--it is not what you ran before. And, yes, you have missed something in the code you are running: you have missed the -using- that should come before `SE11'. (And it should be `SE11', not `SE11.dta'.)

                                Now, you are admittedly in a difficult circumstance here, because I am posting code for a non-existent data set using variable names that differ from those in your real data set. And you therefore have no choice but to modify the code (or rename the variables in your data set!). And in making those modifications, it is easy enough to inadvertently change other things as well. (Had you posted example data in your first post, I would have either gone with your variable names, or if I found them too unworkable I would have included some -rename-ing code.) To minimize these problems, my suggestion is that you copy code from this Forum to your do-file using copy/paste: do not hand retype what you see here. Then use the do-file editor's Find and Replace functions to globally change my variable names to the names of the corresponding variables in your data set. This is most likely to minimize the kind of coding error we are talking about in this post.
