Sunday, 16 November 2014
Reposting a comment by Charlie Plott on the ESA mailing list, regarding ethical approval, which is so sensible that I would like it framed on the wall of every IRB in the land. My emphases.
I think that your assessment might be adjusted a bit. First, the currently evolving criteria regarding the protection of human subjects reflect evidence of harm as opposed to imagined harms. What people might imagine as sources of harm cannot be accepted as rules governing research, and the tendency to confuse the two has been identified as a source of regulatory creep. The concept of "psychological harm" is a good case in point: it brings "harms" that some minds might imagine (the mind of anyone who might be on an IRB) into the discussion as if they were demonstrated fact. I think that there is absolutely no evidence of harm in laboratory experimental economics experiments. Field experiments might be different, but even there the examples typically deal with exposing people to police or social sanctions (e.g. being publicly revealed as homosexual).
Secondly, rules that some researchers might use to guide their own science are not rules that should be generalized as rules imposed for human subjects protection by an IRB. In particular, your list seems to confuse the two, especially with regard to deception, information about procedures, treatments and randomness.
1. A rule against deception is a black hole. The NRC devoted many pages to this and concluded that the subject suffered no harm if the discomfort did not last longer than the experiment. If the deception was related to self-evaluation or self-capacities, then debriefing should be done, because such deceptions can have lingering effects.
2. There is no need for a signed consent form if there is no exposure to harm beyond those experienced as part of daily life. Subject agreement to a database is enough. The requirement for written consent is also a black hole and prevents experiments with remote subject pools. In addition it invites long reviews of wording, legal review, etc. Current NRC thinking trashed the idea.
3. Subjects need to know what they will do in the experiment as a demonstration that they will not be exposed to harm. Information regarding treatments, interactions, etc. need not be part of the information given to subjects unless it is related to risk. Such information can compromise the scientific integrity of the research. I never go over such details unless it is part of the controls I need.
4. Similarly, explanations of random procedures might be needed for experimental control in some types of experiments, but that is not an issue for the protection of human subjects. The questions to pose are whether or not the subject is at risk, and what the historical evidence of harm is. Do you really want to exclude experiments in which subjects are surprised? Does surprise expose them to harm? I think not. Where is the evidence of harm?
5. Why must losses be avoided if the subject is informed about the possibilities? This would mean that serious gambling experiments are out of bounds.
Basically, there is a confusion between what might constitute procedures appropriate for scientific control and procedures needed for the protection of human subjects. The former can differ from experiment to experiment and should not be imposed as part of the latter.
at 10:33 am
Tuesday, 4 November 2014
Gapminder has received a lot of attention for its cool animated graphics. I downloaded their data and ran:
library(anim.plots)  # provides anim.plot() and anim.save()

# Mute the rainbow palette (alpha 1, RGB channels scaled by 0.6)
palette(adjustcolor(rainbow(23), 1, .6, .6, .6))

# One bubble per country-year: life expectancy against log GDP,
# bubble area proportional to population, coloured by region
gm <- anim.plot(lifex ~ GDP + year, data=gm_data, log="x",
    cex=sqrt(pop)*0.0004, pch=19, col=region, xlab="GDP",
    ylab="Life expectancy", speed=5, subset=year>1900)
anim.save(gm, file="gapminder.mp4", type="Video")
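If you want to try this without downloading the Gapminder data, here is a minimal sketch with made-up numbers. It assumes the `anim.plots` package and simply mirrors the column names and formula interface used above; the toy values and scaling factor are invented:

```r
library(anim.plots)

# Invented data: 3 "countries" observed over 5 years
set.seed(1)
toy <- data.frame(
  year   = rep(2001:2005, each = 3),
  GDP    = exp(rnorm(15, mean = 8)),   # roughly log-normal incomes
  lifex  = 50 + rnorm(15, sd = 5),     # life expectancy
  pop    = runif(15, 1e6, 1e8),
  region = rep(1:3, times = 5)         # one colour per country
)

# Same formula interface as the Gapminder call above:
# response ~ x-variable + time-variable
a <- anim.plot(lifex ~ GDP + year, data = toy, log = "x",
               cex = sqrt(pop) * 2e-4, pch = 19, col = region,
               xlab = "GDP", ylab = "Life expectancy")
anim.save(a, file = "toy.mp4", type = "Video")
```

Saving to mp4 needs an external encoder (ffmpeg) on the path, just as with the full Gapminder version.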
Holograms and talking Hans Rosling will be included in the 2.0 release ;-)