RS episode #53: Parapsychology

In Episode 53 of the Rationally Speaking Podcast, Massimo and I take on parapsychology, the study of phenomena such as extrasensory perception, precognition, and remote viewing. We discuss the types of studies parapsychologists conduct, what evidence they've found, and how we should interpret that evidence. The field is mostly not taken seriously by other scientists, which parapsychologists argue is unfair, given that their field shows some consistent and significant results. Do they have a point? Massimo and I discuss the evidence and talk about what the results from parapsychology tell us about the practice of science in general.

http://www.rationallyspeakingpodcast.org/show/rs53-parapsychology.html

5 Responses to RS episode #53: Parapsychology

  1. Max says:

    The premise of controlled experiments is that there are things that we know can’t work. So if a sugar pill appears to cure arthritis, we know that the effect is not from the sugar pill itself but from some hidden variable or bias.
    But if somebody claims that placebos themselves somehow cure arthritis, how can you test that?

  2. Maaneli Derakhshani says:

    Hello Julia,

    I just came across your podcast with Massimo on parapsychology. As you may know, I recently published an essay on RS on this topic as well. Here are some comments regarding what was said in the podcast:

    Julia:

    (1) You mentioned that some Ganzfeld meta-analyses showed positive and significant effects and that some did not, and your example of one that did not was the Milton-Wiseman meta-analysis in 1999. That struck me as a somewhat misleading characterization of the situation. As I mentioned in my RS post on parapsychology, there have been 8 independent, published Ganzfeld meta-analyses since 1985, and with the exception of the Milton-Wiseman meta-analysis, all of them have reported statistically significant effects well below the 1% level. Moreover, the Milton-Wiseman meta-analysis was later shown by statistician Jessica Utts to have used a fundamentally flawed statistical estimate of the effect size and significance level of the combined results. Utts showed that if you use a method that weights each study by trial size (e.g. the exact binomial test), then the overall results are significant at the 4% level (a toy illustration of this pooling appears after point 2 below). I even wrote to Wiseman to ask for his response to Utts' critique, and he confirmed to me that it was a valid criticism. So when one considers this, it turns out that 8/8 Ganzfeld meta-analyses since 1985 have found results that are statistically significant below the 5% level. Ergo, the results of the meta-analyses have actually been quite reliable.

    (2) Re the Bem-Honorton Ganzfeld meta-analysis in 1994 (which combined the results of 11 experiments from the PRL Ganzfeld lab over a period of 8 years), yes, Honorton calculated the putative file-drawer ratio to be 15:1 (a figure Hyman accepted). But the more important point is that Bem and Honorton explicitly declared that ALL trials were published, so there is no file-drawer from the PRL lab.
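
    To make Utts' pooling point concrete, here is a minimal Python sketch with made-up hit/trial counts (the study numbers below are illustrative assumptions, not data from any actual meta-analysis). Pooling the raw trials and running an exact binomial test is what weights each study by its trial size:

        from scipy.stats import binomtest

        # Made-up (hits, trials) counts for three hypothetical Ganzfeld studies.
        studies = [(14, 40), (11, 32), (17, 50)]

        # Pooling the raw trials weights each study by its trial size.
        hits = sum(h for h, _ in studies)
        trials = sum(n for _, n in studies)

        # Exact one-sided binomial test against the 25% chance hit rate.
        result = binomtest(hits, trials, p=0.25, alternative="greater")
        print(f"pooled hit rate = {hits / trials:.3f}, one-sided p = {result.pvalue:.4f}")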

    Massimo:

    (1) Re his comment about the effect size in Ganzfeld experiments being too large, it seems to me that it was based on a misunderstanding of the nature of effect sizes in social science. When parapsychologists say the effect size in Ganzfeld studies is "small", they are simply using the effect-size conventions standardly employed in statistics and social science. For example, by the standard effect-size measure of Pearson's correlation coefficient, r, a "small" effect is r = 0.1 – 0.3, which is precisely the range within which the overall effect size from Ganzfeld studies falls. Also, with an effect size in this range, one cannot merely assume that positive results should always be obtained in every study. One has to do a power analysis (which is standard statistical practice for the design of any study), which tells you the probability of achieving a statistically significant result in a predicted direction, for a given effect size and trial size. For the 31.5% hit rate found in the latest Ganzfeld meta-analysis, and with a trial size of N = 40 (the mean trial size of studies in that meta-analysis), the statistical power to achieve a significant result at the 5% level is only 26% (a sketch of this calculation follows point 7 below). And in the latest Ganzfeld meta-analysis, you'll notice that 27/102 (or ~26%) of the studies were independently significant at the 5% level (which is still significantly above what would be expected by chance alone).

    (2) The comment about how a "hit" vs. a "miss" is defined in the Maimonides studies was incorrect, I'm afraid. The Maimonides studies were designed just like the Ganzfeld studies in that there was 1 randomly chosen target image and a set of decoy images (5, in the case of the Maimonides studies). A "hit" was defined simply as the blind judge ranking the target image as the 1st place match with the dream mentation of the receiver (they used rank-ordered judging). The probability of such a 1st place rank-matching occurring by chance is 1/6. Ergo, there is a 1/6 chance of a "hit" and a 5/6 chance of a "miss". That's standard binomial statistics, and there is no ambiguity in it (a worked example follows point 7 below).

    (3) About the file-drawer, it seems he said that it is more of a problem for parapsychology than for other fields, because we don't know much about the publication trends in parapsychology. But this is actually not true. Consider the following points: (a) the parapsych journals have had a policy since the 1970s of publishing replication attempts with null results; (b) Susan Blackmore directly surveyed unpublished Ganzfeld studies in 1980, found that 7/19 were positive and significant, and thereby ruled out the file-drawer explanation for Ganzfeld research; (c) the fraction of Ganzfeld studies from the PRL lab that produced significant results was comparable to the fraction Blackmore found; (d) the Ganzfeld procedure requires a special laboratory, Ganzfeld research is a small field in which all the researchers know each other, and there are a limited number of journals in which such results would be published.

    (4) Re his comment about the PEAR lab protocol and the difficulty of building perfectly random number generators, it doesn't seem to take into account that they used a tri-polar protocol (in which the intention of the human operator was randomly switched between HI, LO, and BASELINE). The PEAR lab data showed that the HI and LO deviations from chance tracked the operator's assigned intention exactly as the tri-polar protocol predicts, which would be massively implausible if the deviations were just small random biases in the random number generators. Even the 2 German replication groups found the exact same data trends, though their trends never reached statistical significance.

    (5) Re his comment about the quality of the parapsych research being as good as the quality of research in mainstream psychology, I mentioned to him the NRC study by Harris and Rosenthal, the comment by Chris French, and the study by Rupert Sheldrake (comparing the rates of use of double-blind protocols in parapsych vs. mainstream psych), all of which conclude that the best research in parapsych actually exceeds the quality of run-of-the-mill mainstream psych research.

    (6) Regarding the "lack of theory" in parapsychology, I gave him a couple of references to theories that make empirically testable predictions in my rebuttal to his essay on this topic (e.g. Decision Augmentation Theory and Consciousness Induced Restoration of Time Symmetry). Both of them are consistent with the laws of physics, as far as I can tell (and remember that I am a theoretical physicist).

    (7) He gave the "more than a century" argument again. But in my rebuttal post, I cited the 1993 study by psychologist Sybo Schouten, who found that the total financial and human resources devoted to parapsych from 1880 to 1985 were equivalent to only about 2 months' worth of expenditure on mainstream psychology research in 1985. So, caveat emptor!
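
    As mentioned under point 1, here is a minimal sketch of that power calculation in Python, using the standard normal approximation for a one-sample test of a proportion (the choice of approximation is mine for illustration; a standard power routine gives essentially the same figure):

        from math import sqrt
        from scipy.stats import norm

        n, p0, p1 = 40, 0.25, 0.315  # trials per study, chance hit rate, meta-analytic hit rate
        alpha = 0.05                 # one-sided significance level

        # Reject chance when the observed hit rate exceeds p0 by z_alpha standard errors.
        z_alpha = norm.ppf(1 - alpha)
        se0 = sqrt(p0 * (1 - p0) / n)  # standard error under the chance hypothesis
        se1 = sqrt(p1 * (1 - p1) / n)  # standard error under the assumed true hit rate

        threshold = p0 + z_alpha * se0           # smallest hit rate reaching significance
        power = norm.sf((threshold - p1) / se1)  # probability of exceeding it when p1 holds

        print(f"power = {power:.2f}")  # ~0.26, matching the ~26% of studies that were significant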
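
    And as mentioned under point 2, here is a worked example of the binomial statistics behind the Maimonides judging, with hypothetical numbers (the 30 trials and 10 hits below are made up, not Maimonides data):

        from scipy.stats import binomtest

        # With 1 target among 5 decoys, a 1st place rank-match occurs
        # by chance with probability 1/6 per trial.
        p_hit = 1 / 6

        # Hypothetical series: 10 first-place "hits" in 30 trials.
        result = binomtest(k=10, n=30, p=p_hit, alternative="greater")
        print(f"exact one-sided p = {result.pvalue:.4f}")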

  3. Maaneli Derakhshani says:

    By the way, I forgot to check off that I would like to be notified of follow-up comments via email.

  4. Maaneli Derakhshani says:

    Also, some other corrections:

    (8) Massimo calls Bem’s study a “clairvoyance” study. But in fact it is not – it is a precognition study (hence the title, Feeling the Future).

    (9) Massimo strangely refers to Ray Hyman as a "partial skeptic." In fact, Hyman is a founding member of CSICOP, a CSI Fellow, and is widely considered, within the skeptical community, to be the leading skeptical expert on parapsychology: http://en.wikipedia.org/wiki/Ray_Hyman

    (10) Massimo mentions Blackmore’s criticisms of Carl Sargent’s Ganzfeld experiments, but doesn’t mention that Blackmore’s criticisms were disputed by Sargent and his colleagues, and that no evidence of anything comparable to what Blackmore accused Sargent of has been found in Ganzfeld research at other labs. He also doesn’t mention that there have been dozens of other Ganzfeld researchers since Sargent who have found significant results in their experiments, and that removing Sargent’s experiments from the Ganzfeld meta-analyses doesn’t significantly impact the overall hit rate and statistical significance.

    (11) Massimo asserts that there is a problem of sensory leakage in the Ganzfeld studies, including those done at the PRL lab. But he doesn’t provide evidence or an argument for such alleged sensory leakage.

    Given all the above, I’d like to suggest that it may be reasonable to issue some corrections in the next podcast episode, as I believe that having so many factual errors on this topic is uncharacteristic of the quality of scholarship usually exhibited in the RS podcasts.
