About Informed Consent
Is informed consent always necessary when considering the ethics of social science experiments? For some experiments, obtaining informed consent could affect the results (if people are aware of what the researchers are trying to do). Exactly how much are the researchers required to inform the participants about the experiment?
Yes, informed consent is an essential element of research ethics.
Generally speaking, we have to inform the participants of the purpose of our research (e.g. “This is a study about attitudes towards political candidates”), though we do not have to tell them the exact hypothesis of the study.
It is also important that the informed consent form lets the participants know about any potential benefits or harms of taking part in the study, any compensation or incentives, the confidentiality and privacy of the data, and their rights to decline and to withdraw, so they can make an informed decision about participating. In most cases, political science experiments involve only “minimal risk”, i.e. about the same probability and magnitude of harm we would experience in daily life.
For more details on the important elements to include when obtaining informed consent, see this guide from the Pitt IRB, or the American Psychological Association (APA) ethics code (Section 8.02). It is possible to request waivers with adequate justification (see here for an overview of the requirements).
Does knowing they are part of an experiment affect how participants respond? Is there a way to minimize the effects of this on the outcome?
Quite likely! One possibility is the Hawthorne effect: simply being part of an experiment and knowing that you are being observed might change your behavior or how you respond, compared to everyday-life scenarios.
A more general phenomenon (which some argue subsumes the Hawthorne effect) is called demand characteristics (also see textbook p.178), referring to how participants’ interpretation of the experiment’s purpose can change their behavior: they might behave in ways that conform to what they think the researchers want to observe, or in ways that contradict what they perceive as the researchers’ hypothesis.
It is worth noting, however, that not all experiments are equally affected by this potential problem. We might expect that experiments examining behaviors more susceptible to social desirability bias are more vulnerable to bias introduced by demand characteristics than those examining more benign phenomena.
While it is difficult to eliminate this effect completely in most experiments, some strategies exist. For example, researchers can devise designs that use covert or unobtrusive treatments, so that participants are unaware they are part of an experiment (e.g. Enos 2014, Sands 2017).
Deception is another common, though deeply controversial, strategy. For example, audit experiments often rely on deception to examine socially undesirable behaviors such as discrimination (e.g. Butler and Broockman 2011) or violations of norms and rules (e.g. Findley, Neilson and Sharman 2014).
Of course, the use of deception always has to be justified in the ethics review process. See this newsletter (p.13-19) for further discussion on the ethics of using deception in field experiments involving public officials as subjects.
About the Montana GOTV Experiment
The Montana experiment misled people by using an official seal. How did they get the project approved in the first place?
Only the people involved in the process would ever know! If I were to hazard a guess (take it with many, many grains of salt), it is possible that the reviewers did not see the mailer as intentionally misleading. Amid the commotion that followed the controversy, one detail about the mailer did not get much attention: there was in fact a disclaimer line disclosing that the mailer was part of an academic study (below the boxes indicating candidate ideology).
Mailer from the experiment. Squint a little to see the disclaimer. Retrieved from the Internet Archive
Maybe the print is too fine, but it is there. So one could argue that the researchers were not actively trying to deceive the recipients about who was sending the mailer out, and this might be part of the reason why the proposal was approved. Again, I have to emphasize that this is all speculation on my part.
How did the Montana experiment affect people’s decisions? I don’t see a discussion of how it actually influenced turnout or the election outcome.
We might never know! After the whole debacle, the study is unpublishable: partly due to the ethical issues, and partly because the data are likely unusable, given the spillover/contamination caused by the news coverage. After the news outlets reported the story, those in the treatment group who received the mailer would have known where it came from and why they were receiving it (the treatment was contaminated by extraneous factors the researchers did not intend to provide), and those in the control group would also have learned the information in the mailer despite not receiving one (treatment spillover).
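To see why spillover can make the data unusable, here is a toy simulation (every number in it is hypothetical, not from the actual experiment): if news coverage exposes control-group voters to the same information the mailer contained, the difference in turnout between groups shrinks toward zero even though the mailer’s true effect is unchanged.

```python
import random

random.seed(0)

def estimate_effect(contamination_rate, n=100_000):
    """Difference-in-means turnout estimate under control-group spillover.

    Hypothetical numbers: base turnout probability is 0.4, and having
    the mailer's information raises it to 0.5 (a true effect of 0.10).
    """
    treated_turnout = 0
    control_turnout = 0
    for _ in range(n):
        # Treated voters all receive the information.
        treated_turnout += random.random() < 0.5
        # A fraction of control voters learn the information from the news.
        exposed = random.random() < contamination_rate
        control_turnout += random.random() < (0.5 if exposed else 0.4)
    return treated_turnout / n - control_turnout / n

print(round(estimate_effect(0.0), 3))  # no spillover: estimate near 0.10
print(round(estimate_effect(0.8), 3))  # heavy spillover: estimate shrinks toward 0
```

The randomization itself is intact; the problem is that the comparison no longer isolates the mailer’s effect, because the “control” condition was partially treated by the news coverage.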
About Experiments on Development Programs
Are there examples of ethical and effective anti-poverty experiments?
There are many examples of using randomized experiments to evaluate the impact of anti-poverty programs. Some good places to look: the Poverty Action Lab (J-PAL) (a research center at MIT) and GiveWell (a nonprofit focused on effective charities).
Not a question, just an interesting observation: in the US during the 1960s-70s, there was a program similar to Universal Basic Income. It was ended after there was an increase in the divorce rate.
Hmm, this is really interesting to know. So if an experiment shows that UBI improves some aspects of quality of life (e.g. household income, children’s education) but also has “side-effects” such as an increased divorce rate, what should policy-makers make of this? What kind of “side-effects”, or how many, would be considered a “reasonable” trade-off? Going back to what we discussed at the beginning of the course: empirical evidence does not always lead to a neat solution to normative questions.