A PROPOSAL AND CHALLENGE FOR PROPONENTS AND SKEPTICS OF PSI


User avatar
Dj I.C.U.
Posts: 2107
Joined: Thu Mar 02, 2006 3:00 pm

A PROPOSAL AND CHALLENGE FOR PROPONENTS AND SKEPTICS OF PSI

Post by Dj I.C.U. » Thu Jun 29, 2006 10:15 pm

By J.E. Kennedy

ABSTRACT: Pharmaceutical research provides a useful model for doing convincing research in situations with intense, critical scrutiny of studies.  The protocol for a "pivotal" study that is used for decision-making is reviewed by the FDA before the study is begun.  The protocol is expected to include a power analysis demonstrating that the study has at least a .8 probability of obtaining significant results with the anticipated effect size, and to specify the statistical analysis that will determine the success of the experiment, including correction for multiple analyses.  FDA inspectors often perform audits of the sites where data are collected and/or processed to verify the raw data and experimental procedures.  If parapsychological experiments are to provide convincing evidence, power analyses should be done at the planning stage.  A committee of experienced parapsychologists, moderate skeptics, and a statistician could review and comment on protocols for proposed "pivotal" studies in an effort to address methodological issues before rather than after the data are collected.  The evidence that increasing sample size does not increase the probability of significant results in psi research may prevent the application of these methods and raises questions about the experimental approach for psi research.
-----------------------------------------------------------------------------------------------

In recently reading the 1988 Office of Technology Assessment report on experimental parapsychology (Office of Technology Assessment, 1989), I was struck by two topics: the optimism for meta-analyses and the suggestion that proponents of psi and skeptics should form a committee to evaluate and guide research.
In the decade and a half since this report, the use of meta-analyses has become more common, and the controversial aspects and limitations have become clearer.  Meta-analysis is ultimately a post hoc data analysis carried out when researchers have substantial knowledge of the data.  Evaluation of the methodological quality of a study is done after the results are known, which gives opportunity for biases to affect the meta-analysis.  Different strategies, methods, and criteria can be utilized, which can give different outcomes and an opportunity for selecting outcomes consistent with the analyst's expectations.  The meta-analysis results can vary as new studies become available, which raises the possibility of optional stopping and selective reporting.  The various controversies over meta-analyses with the ganzfeld demonstrate these issues (Milton, 1999; Schmeidler & Edge, 1999; Storm, 2000).


Bailar (1997) described similar conclusions from the experience with meta-analysis in medical research:
It is not uncommon to find that two or more meta-analyses done at about the same time by investigators with the same access to the literature reach incompatible or even contradictory conclusions.  Such disagreement argues powerfully against any notion that meta-analysis offers an assured way to distill the “truth” from a collection of research reports. (p. 560)

The research strategies and procedures in parapsychology stand in marked contrast with pharmaceutical research, through which I now earn my livelihood.  The level of planning, scrutiny, and resulting evidence is much higher in pharmaceutical research than in most academic research, including parapsychology.
Pharmaceutical research offers a useful model for providing convincing experimental results in controversial situations.  Key aspects of this research process are described below.

BASIC PHARMACEUTICAL RESEARCH

Post by Dj I.C.U. » Thu Jun 29, 2006 10:15 pm

A company that wants to provide convincing evidence that a new product is effective begins by conducting a few to several small exploratory or pilot studies.  These are called Phase 1 and Phase 2 studies and are used to develop the method of administering the product and the effective dose, as well as to provide initial evidence of the benefits and potential adverse effects in humans.
When the researchers believe that they know the effective dose and can deliver it reliably, and that the effectiveness may be sufficient to be profitable, they plan a “pivotal Phase 3” study.  This is a study that is intended to provide convincing evidence and is normally a randomized experiment.  The study protocol describes the study procedures, specific data items to be collected, patient population, sample size, randomization, and planned analyses.  The general statistical methods expected by the U.S. Food and Drug Administration (FDA) and corresponding agencies in many other countries are described in “Guidance for Industry: E9 Statistical Principles for Clinical Trials” (available for downloading at no charge from http://www.fda.gov/cder/guidance/ich_e9-fnl.pdf).  This document is excellent guidance for anyone doing experimental research in controversial settings and is part of the international standards for pharmaceutical research that are being developed by the International Conference on Harmonisation (ICH).
The protocol is expected to include a power analysis demonstrating that the study sample size has at least .8 to .9 probability of obtaining significant results if the effects are of the assumed magnitude.  Sensitivity analyses exploring a variety of deviations from the assumptions in the power analyses are recommended, and are important for the company as well as for the FDA.  
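To make the power-analysis requirement concrete, here is a minimal sketch (in Python, not part of the original paper) of the standard normal-approximation sample-size formula for comparing two proportions, with a small sensitivity loop over assumed treatment effects; the response rates used are illustrative assumptions only.

[code]
# Minimal sketch of a prospective sample-size calculation for a two-arm
# trial comparing proportions (normal approximation).  The response rates
# below are illustrative assumptions, not data from any study.
from scipy.stats import norm

def n_per_group(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate subjects per group for a two-sided test of two proportions."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p_control + p_treatment) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p_control * (1 - p_control)
                             + p_treatment * (1 - p_treatment)) ** 0.5)
    return (numerator / abs(p_treatment - p_control)) ** 2

# Sensitivity analysis: how the required sample size changes if the assumed
# treatment effect turns out smaller or larger than expected.
for p_treat in (0.40, 0.45, 0.50, 0.55):
    n = n_per_group(p_control=0.30, p_treatment=p_treat)
    print(f"assumed treatment rate {p_treat:.2f}: about {round(n)} subjects per group")
[/code]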
The single “primary variable” that will determine the success of the study is specified, as is the specific statistical analysis, including any covariates.  If there is more than one primary outcome analysis, then correction for multiple analyses is expected to be specified in the protocol. There are usually several “secondary variables” that are used as supporting evidence and are handled more leniently than the primary outcome, but still all variables and the basic analysis plan should be specified in the protocol.
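Because the protocol must pre-specify how multiple analyses will be corrected, here is a minimal sketch of one common adjustment (the Holm step-down Bonferroni procedure); the p-values are invented purely for illustration.

[code]
# Minimal sketch of the Holm step-down correction for multiple analyses.
# The p-values below are invented solely to illustrate the adjustment.
def holm_adjust(p_values):
    """Return Holm-adjusted p-values in the original order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        adj = min(1.0, (m - rank) * p_values[idx])
        running_max = max(running_max, adj)    # keep adjusted values monotone
        adjusted[idx] = running_max
    return adjusted

raw = [0.003, 0.021, 0.048, 0.260]             # hypothetical secondary analyses
print(holm_adjust(raw))                        # [0.012, 0.063, 0.096, 0.26]
[/code]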
Prior to beginning the study, the protocol is submitted to the FDA for review and comments.  This normally involves discussions and revisions.  The company is not legally required to follow the FDA's suggestions at this stage, but it is clearly wise to reach agreement before starting the study.
For most products, two pivotal Phase 3 studies are required.  Both follow the criteria and process described above.  The two studies may be done sequentially or concurrently.  If the results do not turn out as expected, additional studies may be needed.
When the studies are completed and the company is ready to submit the application for approval, the full study reports for all studies (including Phases 1 and 2) are submitted to the FDA along with listings of all data and usually electronic copies of the data.  There is also a section on “integrated analyses” that combines the data from the studies.  The FDA increasingly evaluates applications by performing its own analyses of the electronic data.
It is common for the FDA to send inspectors to the site(s) where data were collected and/or processed to verify the raw data and review the procedures for data collection and processing.  This site audit specifically verifies that the procedures stated in the protocol were followed and that the raw data records match the computer database to a high degree of accuracy.  If there are discrepancies or if the data collection involved particular reliance on electronic data capture, the audit may include evaluating the data processing systems and programs.  Usually, security and restricted access to the data and relevant data processing systems are also significant issues for site audits.  Companies usually have internal quality control procedures that double and triple check all research activities in anticipation of being audited.
After the FDA has all relevant information, it may make a decision internally, or it may convene a scientific advisory board that reviews the information, asks questions to the company and the FDA, and makes recommendations.  An advisory board is likely if the studies produce results that are equivocal.  Any deviation from the protocols in procedure or analysis must be explained and can be a significant obstacle.  
It may be worth noting that workers in pharmaceutical research generally do not take offense that everything they do, including the simplest tasks, is questioned and double or triple checked.  In fact, these quality control efforts reveal a surprising number of mistakes and oversights.  The attitude quickly becomes one of working together as a team to overcome the human tendency to make mistakes.  The redundant checking is taken as an indication of how important the project is.  Pharmaceutical researchers generally view academic research as having much lower quality and find that it takes substantial effort to retrain academic researchers to meet the higher standards.

A PROPOSAL FOR PARAPSYCHOLOGY

Post by Dj I.C.U. » Thu Jun 29, 2006 10:16 pm

Given that the research processes described above are my standard of reference now, I do not expect that the current meta-analysis approaches in parapsychology will provide convincing evidence for even mild skeptics.  The meta-analysis strategy in parapsychology seems to be to take a group of studies in which 70% to 80% or more of the studies are not significant and combine them to try to provide evidence for an effect.  This is intrinsically a post hoc approach with many options for selecting data and outcomes.
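To illustrate how combining mostly nonsignificant studies can still produce a "significant" pooled result, here is a minimal sketch of Stouffer's z-score combination, one common meta-analytic method; the individual study z values are invented for illustration.

[code]
# Minimal sketch of Stouffer's method for combining z scores across studies.
# The individual z values are invented; none reaches p < .05 one-tailed on
# its own, yet the pooled z does.
from math import sqrt
from scipy.stats import norm

study_z = [1.1, 0.4, 1.3, -0.2, 0.9, 1.0, 0.7, 1.2, 0.3, 1.4]   # hypothetical
pooled_z = sum(study_z) / sqrt(len(study_z))
print(f"pooled z = {pooled_z:.2f}, one-tailed p = {norm.sf(pooled_z):.4f}")
[/code]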
More generally, the usual standards of academic research may not be optimal for addressing controversial, subtle phenomena such as psi.  Because of the relatively high noise levels in academic research, widespread independent replication is usually required for evidence to become convincing. Phenomena that are more subtle and difficult to replicate may require a lower noise level for convincing evidence and scientific progress.  
It appears to me that preplanned analysis of studies with sufficient sample size to reliably obtain significant results is necessary to provide convincing experimental results and meaningful probability values in controversial settings such as parapsychology.  Sample sizes should be set so that the probability of obtaining significant results is at least .8 given a realistic psi effect.  This is a substantial change from the current practice in which studies are done with little regard for statistical power and only about 20% to 30% are significant, which results in controversy and speculation about whether the predominately negative results are due to a lack of psi or a lack of sample size.  Performing a prospective power analysis is simply doing what statisticians have long recommended.
If the claims that meta-analyses results provide evidence for psi are actually valid, then this approach of prospective power analysis and study planning will be successful.  If this approach will not work, then the application of statistical methods in parapsychology, including meta-analyses, will not be convincing.
From my perspective now, it would make good sense to form a committee consisting of experienced parapsychologists, moderate skeptics, and at least one statistician to review and comment on protocols for pivotal experiments prior to the experiments being carried out.  The committee could also do independent analyses of data, verify that the analyses comply with those planned in the protocol, and perhaps sometimes do site inspections.  The possibility of a detailed, on-site, critical audit of the experimental procedure and results provides a healthy perspective on methodology.  It would be valuable to have this option available even if it is rarely or never used.
The idea of a registry to distinguish between exploratory and confirmatory experiments has been suggested several times over the years (e.g., Hyman & Honorton, 1986; also see comments in Schmeidler & Edge, 1999).  This strategy would allow researchers more freedom to do exploratory studies as long as the confirmatory or pivotal studies are formally defined in advance.
The present proposal is an extension of the registry idea that would also attempt to resolve much of the methodological controversy before rather than after a study is carried out.  The most efficient strategy to obtain a consensus is to have those people who are critical provide input and agree on the research plan from the beginning.  The net effort to carry out a study and answer criticisms may actually be less and the final quality of evidence should be substantially higher.  
This strategy is consistent with the idea that only certain experimenters can be expected to obtain positive results and does not require that any experimenter, no matter how skeptical, must be able to consistently obtain significant results.  Thus, this strategy is reasonably consistent with the known characteristics of psi research.
This strategy also allows starting with a clean slate for evaluating research.  The studies that comply with this process can stand as a separate category to determine whether there is evidence for psi.  Given the higher quality of each pivotal study, there would be less need for many replications, and experimenters would have more freedom to capitalize on the novelty effect of starting new studies.  A pre-specified analysis and criteria could be set for determining whether a group of studies provides overall evidence for psi.  This could focus on certain experimenters with a track record of success, rather than expecting any and all experimenters to be successful.

Challenges for Skeptics

Post by Dj I.C.U. » Thu Jun 29, 2006 10:16 pm

I expect that many of the more extreme skeptics will be hesitant to participate, or, more likely, will simply never be able to agree prospectively that a protocol is adequate.  These skeptics appear happy to devote many hours to after-the-fact speculation and to listing deficiencies in past experiments, and they claim that convincing experiments are certainly possible, but they will find it very uncomfortable to specify prospectively that a study design is adequate to provide evidence for psi.  These skeptics must recognize that their beliefs, arguments, and behavior are not scientific.
The members of the committee would have to be people who agree with the principle that experimental research methods can be used to obtain meaningful evidence (pro or con) in parapsychological research, and they would have to be willing to support and adhere to the standards of scientific research, no matter what the outcome.
These proposals would also limit the skeptical practice of doing a large number of post hoc internal analyses for studies that are significant and then presenting selected results as worrisome.  If a skeptic (or proponent) believes that certain internal consistencies are important, then appropriate analyses, including adjustment for multiple analyses, can be pre-specified in the protocol.  Post hoc data scrounging by skeptics would be recognized as a biased exercise with minimal value, as is post hoc scrounging of nonsignificant data to try to find supportive results.

CHALLENGES FOR PROPONENTS

Post by Dj I.C.U. » Thu Jun 29, 2006 10:16 pm

Parapsychologists may be skeptical of these proposals because they believe that psi is not sufficiently reliable to carry out this type of research program.  Attempts to apply power analyses to a phenomenon that has the experimenter differences and declines found in psi research bring these reliability issues into focus (Kennedy, 2003a).  However, if these reliability issues preclude useful experimental planning, then the effects are not sufficiently consistent for convincing scientific conclusions.
The declines in effects across experiments are particularly problematic for planning studies and prospective power analysis.  For example, the first three experiments on direct mental interactions with living systems carried out by Braud and Schlitz each obtained consistent, significant effects with 10 sessions (reviewed in Braud & Schlitz, 1991).  Six of the subsequent experiments had 32 to 40 sessions, which prospectively would be expected to have a high probability of success given the effects in the first three studies.  However, only one of the six experiments reached statistical significance.
The declining effects and the corresponding need for large sample sizes make research unwieldy, expensive, and prone to internal declines.  For the 33% hit rate found in the early ganzfeld research (Utts, 1986), a sample size of 192 is needed to have a .8 probability of obtaining a .05 result one-tailed.  Broughton and Alexander (1997) carried out a ganzfeld experiment with a preplanned sample size of 150 trials.  The overall results were nonsignificant and there was a significant internal decline.  Similarly, Wezelman and Bierman (1997) reported overall nonsignificant results and significant declines over 236 ganzfeld trials obtained from 6 experiments at Amsterdam.  On the other hand, Parker (2000) reported overall significant results and no declines in 150 ganzfeld trials obtained from 5 experiments.  Likewise, Bem and Honorton (1994) reported overall significant results without declines in 329 trials obtained from 11 experiments.  These latter three reports apparently summarized an accumulation of studies that were carried out without pre-specifying the combined sample size, but they do raise the possibility that preplanned studies with adequate sample size may be possible without internal decline effects.
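The 192-trial figure can be checked with the usual normal-approximation sample-size formula for a one-sample binomial test; a minimal sketch:

[code]
# Sketch checking the sample size needed for .8 power to detect a 33%
# ganzfeld hit rate against the 25% chance rate, one-tailed alpha = .05.
from scipy.stats import norm

p0, p1 = 0.25, 0.33                 # chance rate and assumed psi hit rate
alpha, power = 0.05, 0.80
z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
n = ((z_a * (p0 * (1 - p0)) ** 0.5 + z_b * (p1 * (1 - p1)) ** 0.5)
     / (p1 - p0)) ** 2
print(round(n))                     # approximately 192 trials
[/code]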
However, the overriding dilemma for power analysis in psi research is that increasing the sample size apparently does not increase the probability of obtaining significant results.  In addition to the Braud and Schlitz (1991) studies described above, the z score or significance level was unrelated to sample size in meta-analyses of RNG studies (Radin & Nelson, 2000) and early ganzfeld studies (Honorton, 1983).  Equivalently, effect size was inversely related to sample size in RNG studies (Steinkamp, Boller & Bosch, 2002) and later ganzfeld studies (Bem & Honorton, 1994).   Contrary to the basic assumptions for statistical research, sample size does not appear to be a significant factor for the outcome of psi experiments.  
In medical research, finding larger average effect sizes in studies with smaller sample sizes is an established symptom of  methodological bias, such as publication bias, selection bias, methodological quality varying with study size, and selected subject populations (Egger, Smith, Schneider, & Minder, 1997).  These biases are evaluated by examining a plot of effect size against sample size to see if the distribution differs from the expected symmetric inverted funnel shape.  This method has revealed apparent biases in meta-analyses for which the conclusions were contradicted by subsequent large studies (Egger, Smith, Schneider, & Minder, 1997).  
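As a rough illustration of the funnel-shaped plot described above, the following sketch plots simulated effect sizes against sample size for an unbiased set of studies; all values are simulated, not real study data.

[code]
# Rough sketch of a funnel-type plot: observed effect size against sample
# size.  Without bias, studies scatter symmetrically around the true effect,
# with the scatter narrowing as sample size grows (the inverted funnel).
# All values here are simulated, not real study data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
true_effect = 0.1
sizes = rng.integers(20, 400, size=60)
effects = true_effect + rng.normal(0.0, 1.0, size=60) / np.sqrt(sizes)

plt.scatter(sizes, effects)
plt.axhline(true_effect, linestyle="--")
plt.xlabel("sample size")
plt.ylabel("observed effect size")
plt.title("Simulated funnel plot (no bias)")
plt.show()
[/code]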
However, for psi experiments, the inverse relationship between effect size and sample size may be a manifestation of goal-oriented psi experimenter effects (Kennedy, 1994; 1995) and decline effects (Kennedy, 2003b) rather than methodological bias.  Whether or not it is caused by psi, this relationship and the associated failure to find larger z scores with larger sample sizes have ominous implications for planning and interpreting psi experiments.
For these and other reasons, I personally doubt that psi has the properties needed for experimental research as historically attempted in parapsychology and as assumed for the proposals here.  Alternative research strategies and more innovative statistical methods may be needed (Kennedy, 1994, 1995, 2003b).  The hypothesis of goal-oriented psi experimenter effects appears to be a good fit to the data.  Psi effects by definition do not conform to known physical laws, but the psi effects in experiments are assumed to conform to the standard physical properties and processes of experimental research.  It does not surprise me to find evidence that this assumption is incorrect.  However, I recognize that the majority of parapsychologists, particularly proponents of meta-analysis, probably dispute my views on this.  The proposals presented here appear to me to be the optimum way to directly evaluate whether the assumptions of experiments and meta-analyses are valid for parapsychology, and to advance the field if these methods apply.
Whether one believes that the standard statistical methods are adequate for psi research or that more innovative approaches need to be developed, the level of planning in research programs must be substantially increased if parapsychology is to advance beyond its present state of uncertainty, controversy, and reliance on post hoc analyses.

METHODS FOR INVESTIGATING GOAL-ORIENTED PSI

Post by Dj I.C.U. » Thu Jun 29, 2006 10:17 pm

EVIDENCE FOR GOAL-ORIENTED PSI

Reviews of the literature have concluded that the data as a whole support the goal-oriented psi hypothesis for the trials in psi experiments (Kennedy, 1978; 1979; Stanford, 1977). Similar conclusions have appeared throughout the history of parapsychology.  Early experimental researchers described psi as "unitary" or "diametric" because success on psi tasks seemed to be achieved directly, without adhering to the usual properties of information processing (Foster, 1940, Pratt et al., 1940).  For example, comparisons of the scoring rates on blind matching ESP tasks (in which the subject matched two unknown cards) with normal clairvoyance (in which the subject guessed one unknown card) suggested that blind matching was a unitary or one-step process rather than a two-step process of first identifying each card as in normal information processing.  The results of blind PK tasks (in which the identity of the target for a PK trial must be obtained by paranormal means) compared with normal PK also supported the unitary or goal-oriented nature of psi (Kennedy, 1978; Stanford, 1977).  More generally, Stanford (1977) pointed out that success on PK tasks does not depend on the degree that the subject understands the workings of the random process that must be influenced.  Thus, psi appears to bypass many normal information processing steps and limitations.  
Schmidt's (1974) direct comparison of majority-vote trials and single-event trials is one of the most definitive experiments on goal-oriented psi.  In this study, the subjects initiated each PK trial with a button press and attempted to influence which of two lights came on.  On about half the trials the decision for the lights was determined by one event from an RNG and on the other trials the decision was determined by the majority-vote of 100 events from a different RNG.  The two types of trials were randomly mixed and the subject and experimenter were blind as to which type of trial was occurring on any button press. Schmidt carried out the study specifically to investigate the goal-oriented psi hypothesis and apparently expected the results to support this hypothesis.
Consistent with the goal-oriented psi hypothesis, the scoring rates were approximately the same on both types of trials and were significantly different from the usual signal-enhancement assumptions for majority-votes.  The scoring rate in the single-event condition was 55.93 percent (z=5.55).  Given this scoring rate on individual events, a scoring rate of over 90 percent would be expected on the majority-vote trials under the usual communication theory assumptions for signal enhancement with majority-vote procedures (Kennedy, 1978).  The observed scoring rate on the majority-vote trials was 53.16 percent (z=2.89), which is not significantly different from the single event trials, but is very significantly different from 90 percent.
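The signal-enhancement expectation can be approximated by assuming the 100 events in a majority-vote trial hit independently at the observed single-event rate; a minimal sketch is below.  The exact "over 90 percent" figure quoted above follows the communication-theory assumptions in Kennedy (1978), so this is only an approximation of the same point.

[code]
# Sketch: expected majority-vote hit rate if each of the 100 events in a
# majority-vote trial independently hits at the observed single-event rate.
# This is the redundancy expectation that the observed 53.16% falls far
# short of; the precise "over 90%" figure in the text follows the
# communication-theory assumptions in Kennedy (1978).
from scipy.stats import binom

p_single, n_events = 0.5593, 100
p_majority = (binom.sf(n_events // 2, n_events, p_single)            # > 50 hits
              + 0.5 * binom.pmf(n_events // 2, n_events, p_single))  # ties split
print(f"expected majority-vote hit rate under independence: {p_majority:.1%}")
[/code]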
Although the available data strongly support the goal-oriented psi hypothesis for certain aspects of information processing, the data are much less clear for other aspects of information processing.  Goal-oriented psi appears to apply with tasks that involve (a) redundant opportunities for psi to operate as in majority-vote procedures, and (b) varying amounts of information about the psi task as with blind PK.  However, task complexity or information processing pertaining to the a priori probability of a hit remains relatively unexplored despite the long recognition that this factor provides important insights into the psi process (Kennedy, 1978; Scott, 1961; Thouless, 1935).  Various aspects of information processing are summarized in the appendix.  The main body of this paper focuses on redundant opportunities for psi to operate because this factor has profound implications for scientific research and the evidence is strong that psi can function in a goal-oriented manner relative to this factor.
The strong evidence that psi can be goal-oriented on the level of individual trials suggests the logical extension of the goal-oriented psi hypothesis to problematic areas such as experimenter effects.  However, this extension requires empirical verification.  The determination of what exactly constitutes a goal is a vital issue.

HIERARCHIES OF GOALS

Post by Dj I.C.U. » Thu Jun 29, 2006 10:17 pm

Virtually all experimental research involves a hierarchy of goals.  As shown in Figure 1, experimenters and participants may focus on successful outcomes for: (a) individual trials or RNG events, (b) groups of trials for within subjects designs, (c) individual subjects, (d) groups of subjects for between subjects designs, (e) individual experiments, (f) the line of research (i.e., groups of experiments), (g) the research institution, (h) their personal careers, and (i) the field of parapsychology.  The existing evidence for goal-oriented psi is evidence that psi sources sometimes focus on goals that are not at the bottom of this hierarchy.
Each of the different levels of the hierarchy of goals involves a higher level of data aggregation.  A majority-vote is an aggregation of random events or guesses.  An experiment is an aggregation of data for trials and subjects.  
Statistical research methods and communication theory are based on the assumption that data aggregation follows certain properties.  The key property is that the reliability or accuracy of estimation improves as more data are aggregated or combined.  This occurs because each event or outcome in the sample is assumed to be an opportunity to measure the effect.  In essence, each measurement or observation is redundant.  However, data aggregation under goal-oriented psi does not result in increased reliability or accuracy of estimation in certain situations because psi basically ignores or bypasses the redundant opportunities.


--- Higher Level          
    - The field of parapsychology        
    - A personal career        
    - The research institution        
    - The line of research (groups of experiments)        
    - Individual experiments        
    - Groups of subjects for between subjects designs        
    - Individual subjects
    - Groups of trials for within subjects designs        
    - Individual trials or RNG events        
--- Lower Level


Figure 1. Hierarchy of Goals for Psi Experiments.  The motivations and goals of psi sources may focus on wanting certain successful outcomes from this hierarchy.

The goal-oriented psi concept implies that this key assumption of communication theory does not apply for outcomes below the psi source's goal on the hierarchy, but does apply for outcomes above the goal.  Under the normal assumptions of communication theory, each RNG event in a majority-vote is a redundant opportunity for psi to operate, and therefore, a majority-vote procedure will enhance the accuracy or scoring rate of psi.  However, Schmidt's (1974) study shows that when the goal of the psi source is the outcome of the majority-vote (or a higher level on the hierarchy), psi bypasses the redundant opportunities and applies directly to the outcome of the goal as a unit.  Thus, majority-vote processes do not lead to enhanced scoring when majority-vote outcomes are the goal of the psi source.  Presumably, if the goal of the psi source was success on each individual event that comprised the majority-vote, then the assumptions of communication theory would apply -- that is, communication theory applies on the hierarchy above the goal of the psi source.  
Similarly, with goal-oriented psi, a key statistical assumption for experimental design and analysis does not apply to outcomes that are below the psi source's goal on the hierarchy, but does apply above the goal.  Under the usual assumptions of statistics, larger sample sizes give more reliable results and greater statistical significance.  With these assumptions, the sample size represents multiple measures of the effect under study and thus is a form of redundancy.  As discussed in a previous paper and in the appendix, the significance level (e.g., z score) is expected to increase linearly with the square root of sample size (Kennedy, 1994).  On the other hand, goal-oriented psi experimenter effects will bypass this type of redundancy and cause a significant result directly on the experimental outcome as a unit, independent of the sample size.  Here too, if the goal of the psi source was the outcome of each trial or subject, then the usual statistical methods would apply.
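The expected square-root growth of the z score under the usual statistical assumptions can be seen directly from the one-sample binomial z statistic; a minimal sketch with an illustrative fixed hit rate:

[code]
# Sketch: under the usual assumptions, a fixed per-trial effect produces a
# z score that grows with the square root of the sample size.  The hit
# rates are illustrative, not data.
from math import sqrt

p0, p_hat = 0.25, 0.33                       # chance rate, fixed observed rate
for n in (50, 200, 800):
    z = (p_hat - p0) * sqrt(n) / sqrt(p0 * (1 - p0))
    print(f"n = {n:4d}: z = {z:.2f}")
# Quadrupling n doubles z.  Goal-oriented psi aimed at the experimental
# outcome as a unit would instead leave z roughly constant as n grows.
[/code]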
The critical question of what determines the goal of the psi source has yet to be answered.  Psychological factors such as motivation presumably play a decisive role.  On a commonsense level, the goal is the outcome that the psi source has the strongest motivation to obtain.  Observational theories (Millar, 1978) propose that feedback is a necessary factor, an idea consistent with Schmidt's (1974) study.  However, the conditions that constitute an act of observation are poorly understood and are probably interwoven with the concept of psychological motivation.  A psi source may devote minimal attention or motivation to a physical feedback event if the psi source is focused on a different outcome.  
These questions can be investigated directly by examining where on the hierarchy of goals the usual assumptions for statistical analysis stop (or start) applying.  Schmidt's (1974) study, combined with other majority-vote studies discussed in the next section, provides strong evidence that goal-oriented psi can operate on the level of groups of trials.  These results are evidence that the goal of the psi source(s) was not the lowest level of the hierarchy, but the results do not indicate which of the higher levels was the goal -- for example, the goal could have been the majority-vote outcome or the experimental outcome.  As discussed previously (Kennedy, 1994), the lack of relationship between z score and sample size in meta-analyses of RNG and ganzfeld studies tentatively supports the hypothesis that goal-oriented psi widely applies to the experimental outcome as a whole.  However, a variety of potentially confounding factors must be resolved before conclusions can be drawn about goal-oriented psi experimenter effects.  A more in-depth evaluation and reporting of the expectations and motivations of experimenters and other research participants is also needed to answer these questions.  Research investigating various levels of data aggregation, feedback, and psychological motivation should provide valuable evidence about the goal-oriented psi hypothesis.
Evidence about goal-oriented psi may come more from the patterns of results for a line of research than from a few definitive studies.  This is particularly true for goals on the higher levels of the hierarchy of goals.  Because psychological factors such as motivation presumably dominate the goal-setting process, different researchers can be expected to have different goals and therefore to obtain different results in their experiments.  Further, the goals may shift over time.  Evidence for the goal-oriented experimenter-effects hypothesis would likely come from consistent patterns of results for individual researchers (including changes over time), combined with consistent differences between experimenters.

EFFICIENT OPERATION OF PSI

Post by Dj I.C.U. » Thu Jun 29, 2006 10:18 pm

Several researchers have suggested that internal patterns in majority-vote studies may indicate the efficient operation of psi (Cox, 1974; Kennedy, 1979; Radin, 1990-1991).  The concept of efficient psi operation implies that psi is goal oriented because efficiency is only meaningful when evaluated relative to achieving a goal.  Remarkably consistent internal patterns have occurred in several majority-vote studies that were carried out with the specific goal or expectation that majority-votes would enhance the accuracy of psi.
Six studies found significant results on the majority-vote outcomes, but nonsignificant results for the raw data comprising the majority-votes (see Table 1, derived from Kennedy, 1978; 1979).  This pattern is completely unexpected under the usual assumptions for majority votes.  The z score for the raw data normally should be larger than the z score for the majority-votes because information about the magnitude of the majorities is lost during the data reduction.  In calculating the z score, the higher scoring rate on the majority-vote outcomes normally is offset by the reduced number of outcomes or sample size.
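To see why the raw-data z should normally exceed the majority-vote z, here is a minimal sketch comparing the two under ordinary independence assumptions; the per-event hit rate and trial counts are illustrative assumptions, not the data in Table 1 below.

[code]
# Sketch: under ordinary independence assumptions, reducing raw events to
# majority-vote outcomes loses information about the size of each majority,
# so the z score for the raw data should exceed the z for the majority
# votes.  The hit rate and counts below are illustrative, not Table 1 data.
from math import sqrt
from scipy.stats import binom

p_event = 0.53                      # assumed per-event hit rate (chance = .5)
events_per_vote, n_votes = 25, 40   # odd block size avoids tied votes
n_raw = events_per_vote * n_votes

z_raw = (p_event - 0.5) * sqrt(n_raw) / 0.5
p_vote = binom.sf(events_per_vote // 2, events_per_vote, p_event)  # P(majority is a hit)
z_vote = (p_vote - 0.5) * sqrt(n_votes) / 0.5
print(f"raw-data z = {z_raw:.2f}, majority-vote z = {z_vote:.2f}")
[/code]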
Higher z scores for the majority-vote outcomes suggest that information was created rather than lost with the majority-vote process.  This is evidence for goal-oriented psi.  In fact, the presence of any pattern in the data for majority-vote studies that does not occur when data are collected for other purposes is evidence for some type of goal-oriented effect.
  The lack of evidence for psi in the raw data suggests that the significant majority-vote outcomes were achieved efficiently. The low z score on the raw trials occurred in the Brier and Tyminski study because the majority-vote hits were concentrated on trials with the smallest majorities.  Cox (1974) attributed his results to the same mechanism but did not present internal analyses to substantiate his position.  Two recent sequential sampling majority-vote studies also found psi focused on the majority-vote trials with the lowest scoring rate on the raw trials (Puthoff, May & Thompson, 1986; Radin, 1990-1991; also see Kennedy, 1994).  As suggested by Radin, this unexpected pattern likely reflects some type of efficient psi operation, but the varying numbers of trials in the majority-votes and the discarding of large amounts of data in the sequential sampling procedure complicate the interpretation.  The patterns that apparently reflect efficient psi have been found in cases when the experimenters did not expect them (Brier and Tyminski, 1970; Puthoff, May & Thompson, 1986) as well as when the experimenters predicted them (Cox, 1974; Radin, 1990-1991); however, in all of these cases the experimenters collected the data with the intention or expectation that majority-votes would lead to signal enhancement.

TABLE 1
COMPARISON OF Z SCORES FOR MAJORITY-VOTE OUTCOMES AND FOR THE RAW DATA

Author                              Type of Psi Task   z for Raw Events   z for Majority Votes
Cox, 1965                           PK                  .43               3.14
Cox, 1966                           PK                 1.10               2.57
Cox, 1974                           PK                 1.11               2.48
Morris, 1965                        PK                 1.50               2.48
Bierman & Houtkooper, 1975 & 1978   PK                 1.86               2.42 / 2.71 (a)
Brier & Tyminski, 1970              ESP                 .58               2.88

(a) This study had two levels of majority votes.

In accordance with the goal-oriented psi hypothesis, the results of majority-vote studies appear to vary with the experimenters' goals at the time of data collection.  When Cox did a post hoc majority-vote analysis on data previously collected without signal enhancement in mind, the z score for the raw trials (z=2.94) was larger than for the majority-vote results (z=2.68), as expected with normal communication theory (see Kennedy, 1978).  However, he found a very different pattern in the three subsequent studies noted in Table 1, when he collected data with the specific intent to use majority-votes to enhance psi accuracy.  Likewise, the absence of signal enhancement with majority-votes in Schmidt's (1974) study is in accordance with his expectations at the time, but other experimenters have expected and found increased scoring rates with majority votes.
The hypothesis of efficient goal-oriented psi has profound implications when combined with the hierarchy of goals.  The majority-vote studies indicate that a goal can be achieved with minimal or no psi effect on lower levels of the hierarchy.  If this principle applies at the higher levels of the hierarchy of goals, an experimenter could, for example, achieve the goal of a successful line of research with minimal psi effects for individual experiments.  Also, the experimenter's goal may shift to a higher level on the hierarchy as the research is replicated.  For example, an experimenter may focus on success for the individual subjects in the first study, on the experimental outcome in the second study, and on the line of research in the third study.  If each of these sequential goals is achieved efficiently, the statistical significance of the experimental outcomes would be expected to decline across studies.
Perhaps the often discussed elusive, capricious nature of psi actually reflects goals being achieved very efficiently, with minimal psi effects on the lower levels of the hierarchy of goals.  Decline effects within and/or across experiments might be expected with shifting goals.  The declines in significance across studies that have frequently been found in parapsychology, and that historically have been attributed to declining enthusiasm among the experimenters (see Kennedy & Taddonio, 1976), may be a manifestation of a more fundamental aspect of psi operation.
The concept of efficient psi provides a methodology to investigate goal-oriented psi in a variety of situations that would otherwise be difficult to investigate.  In the studies using majority-vote methods to enhance psi accuracy, the apparent patterns of efficient psi revealed the nonapplicability of normal statistical assumptions below the psi source's goals on the hierarchy of goals.  This provides evidence for goal oriented-psi and a means to investigate it.  In general, the hypothesis of efficient goal-oriented psi can be tested by exploring the most efficient way to achieve a goal or outcome.  For example, one might explore hypotheses about the most efficient way (i.e., minimal psi effects) to obtain hits on free response trials, differences between two conditions, or successful lines of research.

APPENDIX: TASK COMPLEXITY AND INFORMATION PROCESSING RELATED TO GOAL-ORIENTED PSI

Post by Dj I.C.U. » Thu Jun 29, 2006 10:18 pm

This appendix was prepared in response to a request by a reviewer that I discuss the relationship between goal-oriented psi as described in this paper and other writings on task complexity and information processing.  This discussion is presented as an appendix because it is a technical digression from the primary purpose of the paper.  Four fundamental aspects of information processing that are relevant to task complexity and goal-oriented psi are summarized below, followed by a discussion of how they apply to several topics relating to psi information processing.

Redundant Opportunities for Psi

Post by Dj I.C.U. » Thu Jun 29, 2006 10:19 pm

The presence of repeated or redundant opportunities for psi to operate is a form of information processing that is a key assumption for majority-vote procedures and for statistical experimental methodology.  Normal statistical methods are based on the assumption that the aggregation of data from multiple or repeated measurements of an effect will lead to increased reliability or accuracy of estimation.  Note that the presence of this redundancy does not change the a priori probability of a successful outcome.  For example, the a priori probability of a hit on each trial in Schmidt's (1974) PK study was .5, whether the trial was a majority-vote or single-event trial.  Likewise, from the perspective of goal-oriented psi experimenter effects, the a priori probability of a successful outcome is .05 (or the alpha significance level of the experiment) independent of the sample size.

Information about the Psi Task

Post by Dj I.C.U. » Thu Jun 29, 2006 10:19 pm

Blind PK and blind matching are cases where the amount of information needed to achieve a psi task appears to vary if measured relative to normal sensory information processing for the task.  These are special cases of the general principle that success on PK tasks does not depend on the degree to which the subject understands the details of the random process being influenced.  In an extension of this point, Schmidt (1987) proposed the "equivalence hypothesis" that the magnitude of psi effects does not depend on the physical processes or details of truly random processes if the observation or feedback is psychologically identical.  As with redundancy, the a priori probability of a hit is the same for the different conditions or tasks.

A priori Probability of Success

Post by Dj I.C.U. » Thu Jun 29, 2006 10:20 pm

In the quantitative, mathematical use of the term, information refers to the reduction of uncertainty.  A task with six equally likely outcomes has a greater amount of uncertainty than a task with two equally likely outcomes.  A priori probability is a key aspect of information processing and has been recognized as an important means to gain insight into the psi process (Kennedy, 1978; Scott, 1961; Thouless, 1935).  
The approximately equal deviations in high-aim and low-aim psi tasks have long been recognized as evidence that psi involves partial information on a relatively large number of trials rather than complete information on a few trials (Thouless, 1935).  In general, if psi produces a strong effect on a few trials, then a psi task with a very small a priori probability of a hit due to chance is the optimal means to obtain highly significant results.  For example, if ESP provides complete information about the target on 10 of 100 trials, then a task with a probability of a hit due to chance on each trial of .01 (or .000001) will give much more significant results than a task with a probability of .5.  The available data generally do not appear consistent with this model; however, the role of a priori probability has received very little research effort relative to its importance (Kennedy, 1978).  I believe that the partial information characteristics of psi have profound implications that are not yet understood.  The goal-oriented psi hypothesis may resolve some of the questions about partial information; however, how psi processes information for low-probability events remains an important question.  This question is particularly important for possible goal-oriented psi experimenter effects because the goal of achieving a significant result on an experiment has a relatively low a priori probability due to chance (usually .05).
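The arithmetic behind this example can be checked directly; a minimal sketch assuming complete information on 10 of 100 trials and pure chance guessing on the rest:

[code]
# Sketch of the example above: ESP gives complete information on 10 of 100
# trials and chance guessing on the remaining 90.  A small a priori hit
# probability then yields a far more significant result than p = .5.
from math import sqrt

n, psi_hits = 100, 10
for p_chance in (0.5, 0.01):
    expected_hits = psi_hits + p_chance * (n - psi_hits)
    z = (expected_hits - p_chance * n) / sqrt(n * p_chance * (1 - p_chance))
    print(f"hit probability {p_chance}: expected z = {z:.1f}")
# p = .5 gives z = 1.0, while p = .01 gives z near 10.
[/code]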

Information Transmitted or Utilized

Post by Dj I.C.U. » Thu Jun 29, 2006 10:20 pm

The quantitative amount of information transmitted on a psi task is closely related to the resulting chi-square value (Schmidt, 1970; Timm, 1973), and therefore, is monotonically related to the statistical significance level of the outcome and to other common statistics such as (absolute value of) z and t.  The chi-square value divided by sample size is a measure of the average information transmitted or utilized per trial.  
The assumption that the average information transmitted per trial is constant as more data are aggregated is the basis of normal statistical research methodology.  The chi-square value, or total information transmitted, is expected to increase as the sample size increases.  This assumption leads directly to the expectation that the z score (which is the square root of chi-square) is linearly related to the square root of sample size.  Thus, the redundancy factor and the amount of information transmitted are interwoven.
The relationships among information transmitted, a priori probability of a hit, and partial information on individual trials are important but basically unexplored areas.

Precognitive Timing

Post by Dj I.C.U. » Thu Jun 29, 2006 10:20 pm

Efforts to use precognitive timing tasks to investigate task complexity and goal-oriented psi are primarily investigating the a priori probability of a hit factor.  In precognitive timing tasks, a pseudorandom number generator generates numbers at a rate much faster than human reaction time.  A random outcome or trial is selected when the subject pushes a button.  Vassey (1985, 1986) proposed using precognitive timing tasks to study goal-oriented psi by varying the number of pseudorandom outcomes collected for each button press or timing decision.  He suggested that with goal-oriented psi the scoring rate on the pseudorandom outcomes would be independent of the number of outcomes collected for a timing decision.  However, the a priori probability of obtaining a given scoring rate depends on the number of outcomes.  For example, if 100 binary (P=.5) pseudorandom outcomes are collected from one timing decision (button press), the a priori probability of getting a 60% or higher scoring rate on the 100 outcomes by chance is .028.  But if 10 pseudorandom outcomes are collected for one timing decision, the probability of getting a 60% or higher scoring rate is .38.  Thus, this strategy compares conditions that have different a priori probabilities and assumes that goal-oriented psi operates independent of the a priori probability of a hit.  This assumption requires that more information is transmitted or utilized for cases with small a priori probabilities of a hit and, as noted in the previous sections, does not appear consistent with available data in other contexts.
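The two probabilities quoted above can be checked with the exact binomial upper tail; a minimal sketch:

[code]
# Sketch checking the probabilities quoted above: the chance of a 60% or
# higher scoring rate on binary (P = .5) outcomes, for 100 versus 10
# outcomes per timing decision (exact binomial upper tail).
from scipy.stats import binom

for n in (100, 10):
    p_tail = binom.sf(int(0.6 * n) - 1, n, 0.5)   # P(hits >= 0.6 * n)
    print(f"{n} outcomes: P(scoring rate >= 60%) = {p_tail:.3f}")
# 100 outcomes: about .028;  10 outcomes: about .38
[/code]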
The results of a precognitive timing experiment did not support Vassey's assumptions for goal-oriented psi.  Vassey (1986) used the precognitive timing methodology to see if either the scoring rate or information transmitted (significance level) was constant for different numbers of pseudorandom events per timing decision.  Significant evidence for psi was obtained with one pseudorandom outcome per timing decision, but the results were mixed for two outcomes per decision and nonsignificant for three, four, and five outcomes per decision.  Thus, the data did not support either model.
The precognitive-timing methodology is basically a complicated means to investigate the a priori probability of a hit factor.  However, this methodology also offers a means to investigate the role of multiple feedback events for one timing decision and the related issue of the amount of psychological involvement in one timing decision.


PK as a Force Versus Precognitive Timing

Post by Dj I.C.U. » Thu Jun 29, 2006 10:50 pm

May et al. (1985) proposed using similar strategies as an elegant means to investigate the old question of whether PK operates with a force-like mechanism that biases the random process or with a precognitive timing mechanism that selects favorable random fluctuations of the random process.  If PK is actually a precognitive-timing mechanism, then PK tasks using true RNG's will have the same properties as precognitive-timing tasks with pseudorandom number generators.  Because psi can operate only on the timing decisions for the precognitive model, but on each RNG event for the force-like model, the number of opportunities for psi to operate (redundancy) is very different with the two models.  
Based on the assumption that the amount of information utilized is the same for each timing decision, May et al. hypothesized that the precognitive timing model will result in the z score for the RNG outcomes from one timing decision being  unrelated to the number of RNG outcomes collected for the decision.  On the other hand, if psi operates as a force biasing the RNG outcomes, then each RNG event would be an opportunity for psi to operate and the z scores will increase with the square root of the number of RNG outcomes for each timing decision.  
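A minimal sketch contrasting the two predictions: under the force-like model the z score for the RNG outcomes from one timing decision grows with the square root of the number of events, while under the precognitive-timing model it stays at the single-event level regardless of block size; the starting z value is an arbitrary illustrative assumption.

[code]
# Sketch of the competing predictions discussed in May et al. (1985): the
# predicted z score for the RNG outcomes of one timing decision as a
# function of the number of RNG events collected per decision.  The value
# of z_single is an arbitrary illustration, not data.
from math import sqrt

z_single = 0.2                          # assumed z contribution of one RNG event
for m in (1, 10, 100, 1000):
    z_force = z_single * sqrt(m)        # force-like bias acts on every event
    z_timing = z_single                 # precognitive timing: one decision only
    print(f"{m:5d} events/decision: force z = {z_force:.2f}, timing z = {z_timing:.2f}")
[/code]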
The initial meta-analysis results reported by May et al. (1985) were consistent with the precognitive timing hypothesis and were significantly different from the results expected under the hypothesis of a force-like mechanism.
Unfortunately, these results are not distinguishable from goal-oriented psi if the goal is the outcome of each timing decision or higher on the hierarchy of goals.  This strategy is also confounded by the assumption that the same amount of information is utilized per timing decision, independent of the number of trials, amount of feedback, and related factors of psychological involvement and motivation.  As noted in the previous section, Vassey's (1986) direct investigation of this assumption failed to support it.  In addition, the analysis of May et al. has unresolved methodological issues, particularly concerning the simulation data that constituted almost 30% of the data in the analysis (Kennedy, 1994).
