RESEARCH DESIGNS
Thomas Bevins
Summer 1999

Research designs are platforms for exploring new knowledge in order to better understand phenomena, clarify explanations, and identify causative factors. Although there are no hard-and-fast rules for choosing a design, one must recognize the consequences of choosing one design over another. Choose the design that best addresses the conceptual issues the study presents.

Some questions that would help you decide which quantitative design is most appropriate for your study include:

  1. How much do you know about the variables of interest?
  2. Are you manipulating the levels of the independent variable?
  3. How many independent variables are being tested?
  4. How many levels does each independent variable have, and are these levels experimental or control?
  5. How many groups of subjects are being tested?
  6. How will subjects be selected?
  7. Can subjects be randomly assigned to groups?
  8. Is pretest data being collected?
  9. Are you interested in examining differences or similarities between groups?
  10. How often will observations of responses be made?
  11. What is the temporal (time) sequence of interventions and measurements?
The research design provides the backbone of a study. There are several concepts to understand when choosing a research design; these are summarized below:
Concepts to Consider When Designing a Research Study
QUANTITATIVE RESEARCH DESIGNS
Experimental v. nonexperimental
 
EXPERIMENTAL DESIGN
A. Manipulate variables to bring about an effect
B. All relevant variables have been defined so that they can be manipulated, controlled and studied
C. Random selection and random assignment occur to improve control

NONEXPERIMENTAL DESIGN
A. Observe variables and effects
B. Used to identify/measure/describe variables and/or determine relationships for further (experimental) study
C. Manipulation, control and randomization are lacking
EXPERIMENTAL DESIGN

The purpose of an experimental design is to provide a structure for evaluating the cause-and-effect relationship between a set of independent and dependent variables.

Some of the elements of an experimental design:

  1. Manipulation - the researcher manipulates the levels of the independent variable. This usually means that you are looking at the effect of some treatment on one group of subjects (the treatment group) and comparing that to another group of subjects who do not receive the treatment (the control group). The type of treatment here is the independent variable that gets "manipulated". The idea of manipulation is that the researcher "manipulates" by assigning some subjects to the treatment group and the other subjects to the control group. The researcher does not have to use a control group; the design may instead incorporate two or more "treatments", or various levels of the same treatment, that are compared. It is the effect of this "manipulation" that is measured to determine the result of the experimental treatment. Another variation on the control group is the "attention control group". This group would get some "neutral" experimental attention/treatment, but not the treatment variable being studied. This allows the researcher to look at three groups (experimental, attention control, and "silent" control) and better control for the Hawthorne effect.
  2. Control - the researcher incorporates elements of control so that the evidence supporting a causal relationship can be interpreted with confidence. Using a control group is only one aspect of control. Control is acquired through manipulation, randomization, the use of control groups, and methods to handle extraneous variables. (more on control below).
  3. Randomization - subjects are randomly assigned to at least two comparison groups (a minimal sketch of random assignment follows this list).
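To make random assignment concrete, here is a minimal Python sketch; the subject IDs and the even two-way split are hypothetical examples, not taken from the cited texts. The roster is shuffled and then split into a treatment group and a control group.

    import random

    # Hypothetical roster of study subjects (illustration only).
    subjects = ["S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08"]

    random.seed(42)            # fixed seed so the example is reproducible
    random.shuffle(subjects)   # put the subjects in random order

    # First half of the shuffled list -> treatment group, second half -> control group.
    half = len(subjects) // 2
    treatment_group = subjects[:half]
    control_group = subjects[half:]

    print("Treatment:", treatment_group)
    print("Control:  ", control_group)

Because assignment depends only on the shuffle, known and unknown subject characteristics tend to be spread across both groups, which is what gives randomization its power as a form of control.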
An extraneous variable is a variable that you did not intend to include in your design but that might influence your study in a way that invalidates the results. The researcher attempts to exert control over these extraneous variables in one of three ways:
  1. Selective control - the use of randomization (see above). For instance: if gender is an extraneous variable, then by randomly assigning subjects to groups, the numbers of males and females in each group should be roughly evenly distributed, and the variable gender should not affect the outcome for one group versus the other.
  2. Physical control - control of an extraneous variable by making it a constant. For instance: if gender is an extraneous variable, then only study females. This way the variable gender will not affect the outcome of one group versus the other. The disadvantage of this approach is that it limits your ability to generalize your results. If you only study females, you do not know how this treatment will affect males.
  3. Statistical control - include the extraneous variables in the design. For instance: add gender as another independent variable in your study. This can be a very powerful way to control the effect of an extraneous variable like gender. You will actually analyze the effect of the variable and know how gender affects the outcome of your study (a brief sketch of this idea follows the list).
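As a rough sketch of statistical control, the example below carries gender along as a second factor so the outcome can be examined within each treatment-by-gender cell instead of ignoring gender. The records, scores, and variable names are made up purely for illustration.

    from collections import defaultdict

    # Hypothetical records: (group, gender, outcome score).
    records = [
        ("treatment", "F", 78), ("treatment", "M", 74),
        ("treatment", "F", 81), ("treatment", "M", 69),
        ("control",   "F", 65), ("control",   "M", 62),
        ("control",   "F", 70), ("control",   "M", 60),
    ]

    # Average the outcome within each (group, gender) cell.
    cells = defaultdict(list)
    for group, gender, score in records:
        cells[(group, gender)].append(score)

    for (group, gender), scores in sorted(cells.items()):
        print(f"{group:9s} {gender}: mean outcome = {sum(scores) / len(scores):.1f}")

In an actual study these cell comparisons would normally be tested formally (for example, with a two-way analysis of variance), but the idea is the same: gender becomes part of the design rather than a nuisance left uncontrolled.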
Cause and Effect
  1. the causal variable and the effect variable must be associated with each other
  2. the cause must precede the effect
  3. the relationship/association must not be explainable by another (extraneous) variable
Advantages of an experimental design:
Disadvantages of an experimental design:

Be able to compare and contrast the following types of experimental designs:
    True experimental design: includes random selection (random sampling), a pretest/posttest, random assignment, manipulation of the levels of the independent variable(s), and a control group.
    Posttest-only (after-only) design: you must assume that randomization assures pre-experimental group equivalence.
    Solomon four-group design: combines the true experimental and the posttest-only designs. Allows you to evaluate the effect of the pretest on the posttest scores, and any interaction between the test and the experimental condition.
    Factorial designs: allow the researcher to examine the effects of one or more interventions on different factors or levels of variables in the study. Used for statistical control. Tend to increase sample size because you want enough subjects in each "cell" of the design.
    Counterbalanced (crossover) designs: used when more than one intervention (treatment) is given and you want to know the effect of manipulating the order in which the treatments are given (a minimal sketch of order assignment follows this list).
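The sketch below (hypothetical subjects and treatment labels, not from the cited texts) shows one simple way to counterbalance two treatments, A and B: shuffle the subjects, then alternate the two possible orders so half receive A then B and half receive B then A.

    import random
    from itertools import cycle, permutations

    random.seed(7)  # fixed seed so the example is reproducible

    subjects = ["S01", "S02", "S03", "S04", "S05", "S06"]
    random.shuffle(subjects)  # randomize before assigning orders

    # The two possible orders of treatments A and B: (A, B) and (B, A).
    orders = list(permutations(["A", "B"]))

    # Cycle through the orders so each order is used by half of the subjects.
    assignment = {subj: order for subj, order in zip(subjects, cycle(orders))}

    for subj in sorted(assignment):
        first, second = assignment[subj]
        print(f"{subj}: period 1 -> {first}, period 2 -> {second}")

Comparing the two orders lets you see whether the sequence of treatments, rather than the treatments themselves, influenced the outcome.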

DePoy gives three criteria that need to be met to determine that a true experimental design is appropriate:

  1.     there is sufficient development of theory to warrant the proposition of causality
  2.     you have asked a causal quantitative question
  3.     conditions exist, and legal and ethical issues allow for the random selection of subjects (random sampling), random assignment to groups, use of a  pretest/posttest, and manipulation of the levels of the independent variable(s), including a control group
The basic characteristics distinguishing a true experimental design from a quasi-experimental or nonexperimental design are randomization and comparison of groups. If you are trying to distinguish between experimental and quasi-experimental designs, look for random assignment of subjects to groups and a comparison of those groups. Comparison: is one group getting an experimental treatment while another group gets a different experimental treatment, or no experimental treatment at all (a control group)?
 
QUASI-EXPERIMENTAL DESIGNS

It is not always possible to implement a design that meets the three criteria of a true experimental study (manipulation, control and randomization). Quasi-experimental designs differ from experimental designs because either there is no control group or randomization cannot occur.

Types of quasi-experimental designs:

1. Nonequivalent control group design:

2. After-only nonequivalent control group design:

3. Time series design:
PRE-EXPERIMENTAL DESIGNS

One-shot case study: there is manipulation in that a "treatment" (independent variable) is given, and the dependent variable is then measured, but there is no randomization, and essentially no control.
Pretest-posttest design:  no randomization, not much control.  Like a one-shot case study with a pretest.
Static group comparison: still no randomization, but maybe a little more control.  You have a "control group", but those subjects are non-equivalent, and there is no pretest to see how equivalent they might have been. 

NONEXPERIMENTAL DESIGNS

These designs are used in situations where manipulation of an independent variable, control, or randomization is not involved. These designs focus on describing and measuring independent and dependent variables, and they are sometimes called descriptive research designs. Nonexperimental research does not prove causality; the goal is to describe phenomena and to explore and explain relationships between variables.

Types of nonexperimental designs:

1. Descriptive/Exploratory Survey Studies -

2. Correlational Studies -

3. Ex Post Facto Studies -

4. Prediction Studies -

5. Cross-sectional Studies -

6. Longitudinal Studies -

7. Cause-and-Effect Case Studies -

References:
DePoy, E., & Gitlin, L. (1998). Introduction to research: Understanding and applying multiple strategies. St. Louis: Mosby.

LoBiondo-Wood, G., & Haber, J. (1998). Nursing research: Methods, critical appraisal, and utilization (4th ed.). St. Louis: Mosby.