  • MSc dissertation question on validity and statistical methods

    Hello everybody,

    I am currently writing my MSc dissertation about the public acceptability of a hypothetical environmental tax on certain foods in the USA, and I am interested in whether earmarking the revenue for certain purposes increases respondents' acceptance.

    I collected my data via a survey on Amazon MTurk. Respondents were allocated either a set of low tax rates or a set of high tax rates and were asked about their acceptance of that tax on a 7-point Likert scale (1 = Strongly opposed to 7 = Strongly in favor). I then informed them of the expected revenue and told them that the money would be earmarked. Next, I presented them with some options to which they were asked to allocate the funds. My idea was that this would let me see which purposes are most popular and may be particularly conducive to increasing acceptance. After respondents had allocated the funds as they chose, I asked them about their acceptance again to see whether it had changed.

    So it is sort of a pre-test / treatment / post-test design, but I have no control group, because I had thought the pre-test could serve a similar purpose - though now I am not so sure anymore whether that is valid. My supervisor did not object to what I did when I presented it to him, so I thought it was okay to do it that way.

    I have my survey results (N=400; n=200 per "tax rate set") and found that when asking about acceptance the second time, after the "earmarking" took place, responses tended to be markedly more accepting of the tax, i.e. in a higher category of the scale. When I tabulate acceptance before and after earmarking, I can see a strong difference, both for the low and the high tax group. (The high tax group had fewer responses in the highest categories, but there was a shift to higher categories nonetheless.) I am aware that respondents might be thinking something like "He wants me to be more in favor now - okay, I am going to do this guy that favor" and that this biases the responses the second time. But can the effect really be that strong? Moreover, I am aware that the "treatment" is not identical or homogeneous, because everybody gets to choose what they want to use the money for. The only "homogeneous" element is that earmarking of revenues occurs - which, again, is what I am most interested in: how does an individual's acceptance of an unpopular tax change when the money is earmarked the way they prefer?
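
    In case it helps, a minimal Stata sketch of that before/after comparison - assuming hypothetical variables accept_pre and accept_post (the 7-point responses) and a high_tax group indicator - might look like this:

    Code:
    * cross-tabulate stated acceptance before and after the earmarking exercise
    tabulate accept_pre accept_post

    * nonparametric test of a within-respondent shift on the ordinal scale
    signrank accept_post = accept_pre

    * repeat the test separately for each tax-rate group
    bysort high_tax: signrank accept_post = accept_pre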

    I had planned on using an ordinal logistic regression with acceptance as the dependent variable and had considered introducing "earmarking" as a binary variable. I had hoped that I could construct that variable from the difference between the pre- and post- values of stated acceptance, because the "treatment" of earmarking is the only thing that occurred in between. Is it possible to do that, or would that be nonsensical?
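
    To make the plan concrete, one hedged sketch of that model in Stata - assuming the data are reshaped to long format, with hypothetical variables id, accept0 (pre), accept1 (post) and high_tax - could be:

    Code:
    * go from one row per respondent to one row per respondent per measurement
    reshape long accept, i(id) j(post)

    * ordinal logit of acceptance on the post-earmarking indicator and tax-rate set;
    * cluster standard errors on respondent to account for the repeated measure
    ologit accept i.post i.high_tax, vce(cluster id)

    Without a control group, though, the coefficient on post captures any change between the two measurements, not the effect of earmarking specifically.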

    Probably you stats experts are cringing at what I am doing! It's the first time I am doing something like this, and I think I underestimated how complicated it would get. I also have relatively little time for all of this, so I guess that's why some things are not as well thought through as they could/should have been!

    So can I do any meaningful statistical analysis on this, or am I basically limited to a simple before-and-after comparison, stating that no causality can be inferred and that further research could look into it? I guess if needs must, I could still change my questionnaire to an RCT-style design with only one set of tax rates and two (or three) purposes for earmarking assigned to two (or three) groups, plus a control group without any earmarking. But of course I would like to avoid that, as it will cost me time and money...

    Sorry for the long post - I hope you can understand what my problems are (basically 1. validity and 2. statistical model selection and variable coding) and can help me. If you need me to provide more information, I can do that.

    Best regards,
    Philipp

  • #2
    Philipp:
    welcome to the list.
    I would like to be wrong, but such a long query usually receives a limited number of replies (if any), mainly due to listers' time constraints.
    In fact, questions like yours are better addressed to a teacher/supervisor/professor (who are paid for that, by the way).
    I would recommend posting a more parsimonious query. Thanks.
    Kind regards,
    Carlo
    (Stata 19.0)

    • #3
      Well, it seems to me that your design tests the question "What happens to a person's attitude towards a tax if that person is offered the opportunity to have the proceeds of the tax allocated to his or her personally preferred purposes?" Your design will approach that question, but it is really difficult to imagine how a situation like that would ever arise in the real world, so I don't see how the question is applicable outside the experiment.

      There are other issues here. MTurk provides a sample of convenience, not one from which generalization to actual political populations can really be supported. Some have also questioned whether MTurk respondents give truthful responses. That aside, you have only a pre-post design here: there was no control group that was not given the opportunity to allocate the proceeds. So you cannot, from your data, exclude the possibility that something else that transpired during the interaction with the respondents is responsible for the shift in opinion.

      If this were a doctoral-level dissertation, I would unquestionably advise you to scrap what you have and start over with a better design. Whether a design like this would be acceptable for a master's thesis probably depends on the standards at your institution, and perhaps on standards in your own department. Personally, I probably would reject this at (or before) the proposal stage, but others might feel differently about how valid the research needs to be for a master's thesis. If your advisor doesn't have a problem with it, assuming your advisor has been in your institution and department long enough to have a sense of what is acceptable there, then you are probably on safe ground proceeding. (If your advisor is new to the environment, then I think you would be well advised to get another opinion from a different faculty member who is familiar with local requirements.)

      • #4
        Clyde:
        Thank you for your honest response and for taking the time - so my worries regarding validity were justified. I am aware of the limitations of MTurk; I know it is not representative, and this is also not expected of us. Getting a fully representative sample with a student's limited means is quite hard without funding.

        At least MTurk provided me with a sample that maps relatively well onto population clusters within the US, despite being biased in some regards - more representative than sharing it on Facebook, as many students do, at any rate. Our master's thesis is more about applying a methodology correctly to our data, regardless of representativeness, so this is my greatest concern right now, and it seems there are some things I should worry about.

        If I were to gather new data to do an RCT with one of the tax rate sets, could I not use the attitude towards the tax that was stated first (i.e. before respondents were asked to allocate proceeds) from the sub-sample I already have with that tax rate as a control group? That assumes the questionnaire does not change up to that point, and I believe no change would be required. That way I would "just" need to collect my treatment samples (now with treatments that are identical across individuals in each sample). Could that work?

        Carlo: Thank you for your advice, and sorry for the long post - I just felt I should provide that information so that people know where I am coming from. I will try to keep it shorter in the future.

        Thank you all very much.

        • #5
          I'm not sure I understand the design you are proposing in the third paragraph of #4. Currently you ask the opinion question twice, once before and once after an intervention. The intervention itself is rather flawed, we agree, so you want to instead use a better intervention. I'm not sure what that will be, but that is part of what you are tasked with figuring out in doing a master's thesis. And presumably you will ask the opinion question both before and after the new intervention, as the change in opinion is your outcome of interest, if I understand the situation correctly. So the control group has to look just like that, except that the intervention should be an inert "attention control." That is, the control group must be asked the same question, and then subjected to some kind of interaction that is roughly as engaging and time-consuming as the new "real" intervention. Then they must be asked the opinion question again. That would give you a controlled trial. And, of course, you would randomize assignment to the two arms of the study.

          So I don't see any way you can simply re-purpose the data you already have in service of this design. This group of people was exposed to an intervention, but it certainly could not be considered an inert attention control. They were also sampled at an earlier point in time, so they are, at best, "historical" controls for any new intervention. And I don't see how the post-intervention responses in your current data could be used at all, given that they followed an active intervention. Using just the pre-intervention responses doesn't enable you to make a comparison to the change between pre- and post- in the new intervention group.

          The closest I can think of to a way to make use of your existing data would be something like this:

          The new intervention group would not have a pre-test/intervention/post-test design. Rather, they would be presented a description of the tax plan along with a description of a proposed allocation of the proceeds, and then they would be asked their opinion. That single response might be considered somewhat comparable to the pre-intervention response in your existing data. There are problems: the controls are historical rather than concurrent, and you would need to be sure that the description of the tax itself in the new intervention group is identical to that given originally. But then you have another problem: describing the allocation will add time to the amount of interaction with the subjects prior to administering the opinion question, so that becomes another difference. I think this design could pass muster for a master's thesis. But don't take my word for that: run it by your advisor - the local faculty have the last word on these things.
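
          If it helps to picture that, a rough Stata sketch - with hypothetical file and variable names, treating the old pre-intervention responses as historical controls - might be:

          Code:
          * historical controls: pre-intervention responses from the existing data,
          * restricted to the tax-rate set that the new group will also see
          use old_survey, clear
          keep if high_tax == 0
          keep id accept_pre
          rename accept_pre accept
          generate byte arm = 0

          * new group: tax description plus allocation description, then one opinion question
          append using new_intervention   // contains accept and arm == 1

          * compare acceptance across the two arms on the ordinal scale
          ologit accept i.arm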

          • #6
            Thank you again, Clyde, I really appreciate it. I apologise if my description in the previous post was somewhat unclear, but I was actually proposing pretty much what you wrote in the last paragraph: using just the pre-intervention results from the existing data and then adding intervention groups that get only the treatment and the post-test. I believe the difference in interaction time would still be acceptable, and I will try to minimise the time that describing the intervention takes as much as possible. I got my data last week, and I cannot imagine that the marker would be so strict as to consider that time difference an issue if I collect the new data soon!

            Since I can only collect data for a limited number of intervention groups, I reckon it would make sense to have only one type of purpose for earmarking per intervention (e.g. spending on social security OR environment/energy, not combinations thereof), so that I could also see whether earmarking for some purposes is more or less popular than for others. Is that correct?
            It might be interesting to have combinations as well, but I think I cannot really do that.

            Another thing I am wondering - if I had a pre-test/intervention/post-test in the treatment groups, could that not introduce the type of bias I was worried about in the first post, namely that respondents feel they are expected to change their opinion? Would a treatment/post-test design not avoid this type of bias better? Then the research question would be something like: "Does respondents' initial acceptance (upon first hearing of the proposed policy) differ based on whether the proposal contains earmarking?"

            • #7
              Since I can only collect data for a limited number of intervention groups, I reckon it would make sense to have only one type of purpose for earmarking per intervention (e.g. spending on social security OR environment/energy, not combinations thereof), so that I could also see whether earmarking for some purposes is more or less popular than for others. Is that correct?
              It might be interesting to have combinations as well, but I think I cannot really do that.
              Yes, you have to live within your resources and you are better off concentrating them on a single intervention than getting inadequate samples on each of several different interventions.
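
              As a rough illustration of that trade-off, a back-of-the-envelope power calculation in Stata - treating the 7-point scale as approximately continuous, with a hypothetical half-point effect and an assumed SD of 1.5 - might be:

              Code:
              * sample size per arm to detect a 0.5-point shift on the 7-point scale,
              * assuming SD 1.5, 80% power, and a 5% two-sided test
              power twomeans 4 4.5, sd(1.5) power(0.8)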

              Another thing I am wondering - if I had a pre-test/intervention/post-test in the treatment groups, could that not introduce the type of bias I was worried about in the first post, namely that respondents feel they are expected to change their opinion? Would a treatment/post-test design not avoid this type of bias better? Then the research question would be something like: "Does respondents' initial acceptance (upon first hearing of the proposed policy) differ based on whether the proposal contains earmarking?"
              Sorry, I neglected to answer this earlier. I think you are correct. But there is a way around this that is often used in opinion polling: you ask the original opinion question, then you give the intervention. Then, instead of repeating the same opinion question, you ask "Based on this additional information, is your opinion about this tax: 1. Much less favorable, 2. Somewhat less favorable, 3. Unchanged, 4. Somewhat more favorable, or 5. Much more favorable?" or something to that effect. This question is posed specifically to counter the presumption that a change in opinion (and in a particular direction) is expected. I'm not 100% confident this completely eliminates the kind of bias you fear, but I think it does reduce it.
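
              If that follow-up item were used, a minimal sketch for comparing the 5-point change responses across arms (with hypothetical variables change and arm) might be:

              Code:
              * distribution of the change item by arm, with a chi-squared test
              tabulate change arm, column chi2

              * or model the ordinal change response directly on the treatment indicator
              ologit change i.arm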

              • #8
                Thank you for your help, Clyde! I guess if I go forward with the treatment/post-test design, I will not have to worry about this type of bias. But if I have to start from scratch and collect everything again, I think I would ask in that way. I had even thought about asking that way before - what a shame.

                Thank you so much for your help, this was really great input. I have contacted my supervisor and will have to wait and see what he says.
