
Experimental Controls

For every confounded variable, there is a potential control. In one sense of the term, to control a variable is precisely to remove it as a confounded variable. This is called methodological control; it means a variable is "ruled out" as a confounded variable by the logic of the experiment.

What does the word "control" mean when used in the context of experimental design?

Most beginning students take the word control literally. They visualize a laboratory scientist holding some variable such as temperature steady. But physical control over variables is not always necessary.

Methodological control can sometimes be achieved merely by measuring a variable. If one can demonstrate that there is no difference between groups on that variable, the variable is removed as a confound.

How could the student have eliminated volume as a confounded variable in his research?

On some occasions, an experimenter controls a variable by holding it steady. In the example on the previous page, the student could have eliminated his confounded variable (volume) by making sure his recordings were played at exactly the same volume level.
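For concreteness, here is a minimal Python sketch of "control by measurement," using made-up volume numbers (the data and group names are hypothetical, not from the study described here). If the two groups do not differ on the measured variable, it cannot explain a difference in the results.

```python
# A minimal sketch of control by measurement (hypothetical data):
# if the two groups do not differ on a potential confound such as
# playback volume, that variable is ruled out as a confound.

from statistics import mean, stdev

# Hypothetical playback volumes (in decibels) for each group's recordings
volume_group_a = [62.1, 61.8, 62.3, 62.0, 61.9]
volume_group_b = [62.0, 62.2, 61.9, 62.1, 62.0]

print("Group A volume: mean =", round(mean(volume_group_a), 2),
      "sd =", round(stdev(volume_group_a), 2))
print("Group B volume: mean =", round(mean(volume_group_b), 2),
      "sd =", round(stdev(volume_group_b), 2))

# If the means are essentially identical, volume is not a difference
# between the groups, so it cannot account for a difference in results.
```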

Placebo Effects

Believing in something often helps to make it work. This is called the placebo effect. A placebo (pronounced pluh-SEE-bo) is literally a "pleasing thing."

The word placebo sometimes refers to a sugar pill or fake medicine. However, the phrase placebo effect has come to mean much more. To most psychology researchers, the phrase now refers to any situation in which a person's belief in a treatment contributes to the effect of the treatment (Critelli & Neumann, 1984).

Placebo effects, defined this way, are not imaginary. They are genuine changes produced by a person's knowledge or belief. For example, if you wore a certain cologne which made you feel more attractive, you might act more attractive and be more attractive, even if the cologne had no effect.

What are placebo effects? Are they imaginary?

When testing the effects of a new medicine, researchers must give the control group a placebo: a realistic-looking fake that contains no active ingredient. The experimental group gets the real medicine, the control group gets the placebo.

Both groups think they are getting a real medicine. This way the researchers can tell if the medicine has any effect beyond the expected placebo effect, which should be present equally in both groups.

How is the placebo effect controlled in research?

Students have no trouble understanding this idea, yet many still miss this question:

1) How do you control the placebo effect?

   a) give the control group an experimental treatment

   b) give the experimental group a "sugar pill"

   c) create a placebo effect in the control group

   d) make sure nobody gets a placebo

The correct answer is "c." To some students, that sounds wrong. If it sounds wrong to you, study this concept until you understand it. A researcher tries to create a placebo effect in the control group to equalize the placebo effect in the two groups.

Remember, a confounded variable is an unwanted difference between groups. An experimental control is a procedure that removes such an unwanted difference, sometimes by holding a variable steady, sometimes by equalizing it between groups.

If both groups think they are receiving a genuine treatment, then belief is not a difference between groups. Both groups should experience an equal placebo effect. That equalizes it between the groups (controls the variable of belief), removing belief in treatment as a confounded variable.
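Here is a minimal Python sketch of this equalizing logic, using hypothetical subject IDs (none of the names or numbers come from a real study): subjects are randomly split into two groups, and every subject receives an identical-looking treatment, so belief in the treatment is the same in both groups.

```python
# A minimal sketch (hypothetical subjects) of equalizing the placebo
# effect: random assignment plus an identical-looking treatment for
# everyone, so belief is not a difference between groups.

import random

subjects = ["S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08"]
random.shuffle(subjects)

half = len(subjects) // 2
assignment = {s: "real medicine" for s in subjects[:half]}
assignment.update({s: "placebo" for s in subjects[half:]})

for subject, treatment in sorted(assignment.items()):
    # Both groups get the same-looking pill and the same instructions;
    # only the researcher's coding sheet records which is which.
    print(subject, "receives an identical-looking pill (actually:", treatment + ")")
```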

A two-group experimental design in which subjects do not know whether they are receiving a placebo treatment or a real treatment is called a single-blind design. The subjects are blind to which treatment they are receiving.

What is a single-blind design?

The experimenter, however, may know which treatment each subject is getting. An experiment in which both subjects and experimenters are kept "blind" about which group gets the real treatment is called a double-blind design (discussed below).

Experimenter Effects

Earlier we discussed measurement effects, defined as effects on the data due to the act of measurement itself. One example of a measurement effect is the observer effect. This is an unwanted influence on the results due to the presence of an observer.

Closely related is the concept of an experimenter effect. This is an unwanted effect on subjects due to actions, expectations, or presence of an experimenter. Demand characteristics of an experimental situation can also create a measurement effect, when they affect the data.

What is an "experimenter effect"?

Experimenters may accidentally treat subjects in experimental and control groups differently. This can happen, for example, if they know one group is getting a real treatment and the other is getting a fake or placebo treatment.

Such expectancy effects are very powerful. One scientist, Robert Rosenthal of Harvard, spent over 30 years investigating them.

Rosenthal's findings are quite remarkable. In one famous example, a group of graduate students was told that one set of rats was "brighter" than another and could be expected to learn faster. The student researchers confirmed this, finding that the supposedly brighter rats learned faster.

Rosenthal had actually chosen the two groups of rats randomly, because he was studying expectations, not rat learning. The graduate students expected one group to do better, and evidently that was enough to influence the results.

Perhaps the graduate students were a little quicker to hit the buttons on their timers when they expected rats to be bright. Perhaps the rats themselves were somehow influenced by the expectations of the experimenters. They might have been handled with more affection and respect when the students thought they were genius rats; who knows.

How did Rosenthal demonstrate expectancy effects in graduate students?

The explanation for expectancy effects is not always obvious. What Rosenthal showed, again and again, was that such effects do occur.

Rosenthal branched out from laboratory settings and showed expectancy effects also occurred in a wide variety of natural settings. He demonstrated them in hospitals, courtrooms, and schools.

Rosenthal's most famous study involved "gifted students" who were actually randomly chosen students. He administered an IQ test, then gave teachers fake data showing some of the students were gifted and could be expected to excel during the year.

Sure enough, those students improved more than the others in their academic performance during the year. This led to Rosenthal's book, Pygmalion in the Classroom.

The name was a reference to an ancient Greek legend. A sculptor fell in love with the beauty of a statue he had created, inspiring the gods to bring her to life. Similarly, Rosenthal argued, teachers who had high positive expectations for students could "bring them to life."

The Double-Blind Design

How can unwanted experimenter effects be prevented? The solution is an experimental design in which the researcher does not know which subjects are receiving a genuine treatment.

This is called a double-blind design. The subjects do not know which group they are in, and neither does the experimenter nor the person collecting data. All are kept blind about which group subjects are in until the data-collection phase of the research is over.

What is a double-blind design?

Perhaps double-blind experimental methodology should be called triple-blind. Experimenter effects must be eliminated during both the treatment and data collection phases.

In a double-blind experiment, neither (1) the subject, nor (2) the person giving the treatment, nor (3) the person collecting data knows which treatment a subject received. After all the data are collected, group membership is revealed, and the data can be analyzed to see if there are differences between the groups.
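Here is a minimal Python sketch of the bookkeeping behind a double-blind study, with invented subject codes and scores (all data here are hypothetical): treatments are handed out under arbitrary codes, and the key linking codes to groups is sealed away until data collection is finished.

```python
# A minimal sketch (hypothetical data) of double-blind bookkeeping:
# everyone running the study sees only subject codes and scores;
# the key linking codes to "real" vs. "placebo" stays sealed until
# the data-collection phase is over.

import random

subjects = ["S01", "S02", "S03", "S04", "S05", "S06"]
random.shuffle(subjects)

# The key is created once, then set aside from subjects, treatment-givers,
# and data collectors alike.
sealed_key = {s: ("real" if i < len(subjects) // 2 else "placebo")
              for i, s in enumerate(subjects)}

# Scores recorded during the study, indexed only by subject code.
collected_scores = {"S01": 7, "S02": 5, "S03": 8, "S04": 4, "S05": 6, "S06": 5}

# Only after all data are collected is the key opened and the groups compared.
real = [collected_scores[s] for s, grp in sealed_key.items() if grp == "real"]
placebo = [collected_scores[s] for s, grp in sealed_key.items() if grp == "placebo"]
print("real mean:", sum(real) / len(real), " placebo mean:", sum(placebo) / len(placebo))
```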

Why should double-blind methodology perhaps be called "triple blind"?

This "blind" methodology is appro­priate whenever knowledge of who is in the experimental or control group might alter the data. As Rosenthal showed, expectancy effects are widespread and pervasive. Double-blind procedures should be employed whenever possible.

An example of double-blind research is a study of subliminal learning tapes (Greenwald, Spangenberg, Pratkanis, and Eskenazi, 1991). These cassette tapes were quite popular in the 1980s, often advertised in magazines and newspapers. The tapes supposedly programmed the unconscious mind by giving instructions at a low volume, too quiet to be heard, while a person was resting or sleeping.

The researchers gave tapes to subjects who used them for a month. The tape label indicated the tape was for (1) self-esteem improvement or (2) memory improvement. Half the tapes were deliberately mislabeled. Neither the researchers nor the subjects knew which tapes had the wrong label until after the data were collected.

How did psychologists use double-blind methodology to test the effectiveness of subliminal learning tapes?

Many of the subjects claimed the tapes helped them in self-esteem or memory improvement. However, these improvements corresponded to the label, not to the actual contents of the tape. (This type of research is sometimes called a false label experiment.)

In other words, a student who received a self-esteem improvement tape, mislabeled as a memory improvement tape, would typically report improvements in memory, not self-esteem. This showed that subliminal self-help tapes had no real effect beyond a placebo effect.
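The logic of the false label experiment can be shown in a minimal Python sketch with invented example data (the records below are illustrative, not the study's actual results): if reported gains track the label rather than the tape's actual content, the improvement is a placebo effect.

```python
# A minimal sketch (hypothetical data) of false-label logic: each tape
# has a true content and a label, and half the labels are wrong. If
# reported improvement follows the label rather than the content, the
# effect is a placebo effect.

tapes = [
    {"content": "self-esteem", "label": "self-esteem", "reported_gain": "self-esteem"},
    {"content": "self-esteem", "label": "memory",      "reported_gain": "memory"},
    {"content": "memory",      "label": "memory",      "reported_gain": "memory"},
    {"content": "memory",      "label": "self-esteem", "reported_gain": "self-esteem"},
]

follows_label = sum(t["reported_gain"] == t["label"] for t in tapes)
follows_content = sum(t["reported_gain"] == t["content"] for t in tapes)
print("gains matching the label:  ", follows_label, "of", len(tapes))
print("gains matching the content:", follows_content, "of", len(tapes))
```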

Experimenter effects are relevant in any experimental comparison. Double-blind methodology should be used whenever people expect a particular outcome.

In general, when should double-blind methods be used?

For example, the editors of Stereo Review found that expert listeners typically rated the sound of a high-priced CD player as superior to that of a lower-priced CD player. However, when double-blind methods were employed, the expert listeners said all the CD players sounded the same (Master, 1986).

The results of poorly controlled pilot studies or informal clinical observations may lead to great excitement. When proper controls such as double-blind procedures are put into place, the exciting finding may disappear.

What often happens after an exciting pilot study?

This is exactly what happened with a study that explored the effects of hemodialysis (the blood-cleansing procedure) on schizophrenia, a serious mental illness. Initial studies indicated promising effects, but when double-blind methods were employed in a full-scale clinical trial, there was no effect at all (Carpenter, Sadler, Light, Hanlon, and Kurland, 1983).

If you do not happen to know about hemodialysis, it requires a person to be hooked up to a large machine that passes the person's blood through it, imitating the function of the kidneys. To do a double-blind study, it was necessary to give half the subjects a realistic simulation of hemodialysis.

To make the design double-blind, it was also necessary for the people evaluating the outcomes not to know which participants received the real therapy and which received the placebo therapy. Those were elaborate controls to institute, but it was worth it, to find out that hemodialysis did not, in fact, help in the treatment of schizophrenia.

Such failures can be discouraging. But such "failures" are the reason science marches forward. Theories are not accepted just because a treatment appears to work in the field or in a clinical setting.

Scientists want to know why something works, so after a promising field trial, they put controls into place to eliminate placebo effects and experimenter effects. Sometimes the effect disappears. If that happens, we are better off knowing about it, rather than believing in a treatment that works only through a placebo effect.

---------------------
References:

Carpenter, W. T., Sadler, J. H., Light, P. D., Hanlon, T. E., & Kurland, A. A. (1983). Schizophrenia and dialysis. Artificial Organs, 7, 357-364.

Critelli, J., & Neumann, K. (1984). The placebo: Conceptual analysis of a construct. American Psychologist, 39, 32.

Greenwald, A. G., Spangenberg, E. R., Pratkanis, A. R., & Eskenazi, J. (1991). Double-blind tests of subliminal self-help audiotapes. Psychological Science, 2, 119-122.

Master, I. G. (1986, January). Do all CD players sound the same? Stereo Review, pp. 51-57.

