SPED 8013 | Chapter 10: Planning and Evaluating Applied Behavior Analysis Research

Importance of Individual Subject

  • Enables applied behavior analysts to discover and refine effective interventions for socially significant behaviors
  • Contrasted with groups-comparison approach

Groups-Comparison Experiment

  • Randomly selected pool of subjects from relevant population
  • Divided into experimental and control groups
  • Pretest, application of independent variable to experimental group, and posttest

Group Data Not Representative of Individual Performance

  • Some individuals within a group may stay the same or deteriorate, yet improvement by others can still produce an apparent overall average improvement (see the sketch below)
  • To be most useful, treatment must be understood at an individual level
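
A minimal sketch of this point, using hypothetical pretest/posttest scores (invented for illustration, not data from the chapter): the group mean improves even though three of the five subjects stayed the same or deteriorated.

    # Hypothetical pre/posttest scores for a five-subject group (illustrative only)
    pre = [10, 10, 10, 10, 10]
    post = [25, 22, 9, 10, 4]  # subjects 3-5 stayed the same or got worse

    pre_mean = sum(pre) / len(pre)     # 10.0
    post_mean = sum(post) / len(post)  # 14.0 -> apparent "average improvement"

    for i, (a, b) in enumerate(zip(pre, post), start=1):
        print(f"Subject {i}: change = {b - a:+d}")            # +15, +12, -1, 0, -6
    print(f"Group mean change: {post_mean - pre_mean:+.1f}")   # +4.0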

Group Data Masks Variability

  • Hides variability that occurs within and between subjects
  • Statistical control should not be a substitute for experimental control
  • To control effects of any variable, must either hold it constant or manipulate it as an independent variable

Absence of Intrasubject Replication

  • Power of replicating effects within individuals is lost
  • When group results do not represent individual performance, researchers should supplement the group data with individual results

Importance of Flexibility in Design

  • An effective researcher actively designs each experiment to fit its research question; in this sense, every experiment has its own unique design
  • A good experimental design is any arrangement of independent variable manipulations that produces data that convincingly address the research question
  • The book presents analytic tactics in design form

Experimental Design

  • Often designs entail a combination of analytic tactics
  • Infinite number of possible designs with different combinations
  • Use ongoing evaluation of data from individuals to employ baseline logic of prediction, verification, and replication

Internal Validity

  • Experiments that demonstrate clear functional relations have a high degree of internal validity
  • Experimental control requires accounting for all relevant variables, not just the independent variable
  • Steady state responding provides evidence of experimental control
  • Confounding variables are threats to internal validity

Subject Confounds

  • Maturation: changes in subject over course of experiment
  • Repeated measurement helps detect and control for the effects of uncontrolled variables such as maturation

Setting Confounds

  • Studies conducted in natural settings are more prone to confounding variables than studies conducted in controlled laboratory settings
  • If change in setting occurs, should then hold new conditions constant until steady state responding is observed

Measurement Confounds

  • Observer drift or bias
  • Keeping observers naive to expected outcomes can reduce observer bias
  • Baseline conditions must be maintained long enough for reactive effects of measurement to run their course and for stable responding to be obtained

Treatment Integrity

  • Extent to which the independent variable is implemented or carried out as planned
  • Low treatment integrity makes it very difficult to confidently interpret experimental results
  • Treatment drift: when application of independent variable in later phases differs from original application

Precise Operational Definition

  • A high level of treatment integrity requires a complete, precise operational definition of treatment procedures
  • Define the treatment in four dimensions: verbal, physical, spatial, and temporal

Simplify, Standardize, and Automate

  • Simple, precise treatments are more likely to be consistently delivered
  • Simple, easy-to-implement techniques are more likely to be used and socially validated
  • Experimenters should standardize as many aspects as possible and practical

Training and Practice

  • Train and provide practice for the individuals who will conduct the experimental sessions
  • Could provide a detailed script, verbal instructions, modeling, or performance feedback

Assessing Treatment Integrity

  • Collect treatment integrity data to measure how closely the actual implementation of the conditions matches the written methods (see the sketch after this list)
  • Ongoing observation and calibration allow the researcher to use retraining and practice to maintain high treatment integrity
  • Reduce, eliminate, or identify the influence of as many potentially confounding variables as possible
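
One common way to summarize treatment integrity data is the percentage of procedural steps implemented as written. A minimal sketch, assuming a hypothetical per-session checklist (the step names below are invented for illustration):

    # Hypothetical session checklist: True = step implemented as written
    checklist = {
        "delivered instruction verbatim": True,
        "waited 5 s before prompting": True,
        "followed specified prompt hierarchy": False,
        "delivered reinforcer within 3 s": True,
    }

    integrity = 100 * sum(checklist.values()) / len(checklist)
    print(f"Treatment integrity: {integrity:.0f}% of steps implemented as planned")  # 75%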

Social Validity

  • Includes the social significance of the target behavior, the appropriateness of the procedures, and the social importance of the results
  • Usually assessed by asking direct and indirect consumers
  • Consumer satisfaction

Social Importance of Behavior Change Goals

  • To determine socially valid goals:
    • Assess the performance of persons considered competent
    • Experimentally manipulate different levels of performance to determine which produces optimal results
  • Methods for assessing outcomes:
    • Compare the subject’s performance to a normative sample (see the sketch after this list)
    • Use standardized assessment instrument
    • Ask consumers to rate social validity of performance
    • Ask experts to evaluate subject’s performance
    • Test subject’s new performance in natural environment
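
As one illustration of the normative-comparison method above, a subject's post-intervention score can be expressed relative to a normative sample's mean and standard deviation. This is only a sketch with invented numbers; the chapter does not prescribe a particular formula.

    import statistics

    # Hypothetical normative sample and post-intervention score (illustrative only)
    normative_sample = [42, 47, 50, 51, 53, 55, 58, 60]
    subject_score = 52

    norm_mean = statistics.mean(normative_sample)   # 52.0
    norm_sd = statistics.stdev(normative_sample)    # about 5.9
    z = (subject_score - norm_mean) / norm_sd

    # A z-score near 0 suggests performance within the range of the normative sample
    print(f"Normative mean = {norm_mean:.1f}, SD = {norm_sd:.1f}, subject z = {z:+.2f}")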

Social Importance of Interventions

  • Several scales and questionnaires for obtaining consumers’ opinions on acceptability of interventions
  • Examples:
    • Intervention Rating Profile
    • Treatment Acceptability Rating Form

External Validity

  • Degree to which a functional relation in an experiment will hold under different conditions
  • A matter of degree, not all-or-nothing
  • Findings with greater generality make a greater contribution to applied behavior analysis

External Validity and Applied Behavior Analysis

  • Generality of findings in ABA is assessed, established, and specified through replication of experiments
  • Two major types of scientific replication:
    • Direct Replication
      • Duplicates exactly the conditions of an earlier experiment
      • Intrasubject direct replication: Uses the same subject to establish reliability of functional relation
      • Intersubject direct replication: Uses different but similar subjects to determine generality
    • Systematic Replication
      • Researcher purposefully varies one or more aspects of earlier experiment
      • Can demonstrate reliability and external validity of earlier findings
      • Can alter any aspect: subjects, setting, administration of independent variable, or target behaviors

Evaluating Applied Behavior Analysis Research

  • Questions to ask in evaluating the quality of applied behavior analysis research fall into four categories:
    • Internal validity
    • Social validity
    • External validity
    • Scientific and theoretical significance

Theoretical Significance and Conceptual Sense

  • Evaluate a study in terms of its scientific merit
  • Look at its contribution to the advancement of the field
  • “knowledgeable reproducibility”

Need for More Thorough Analyses

  • Need conceptual understanding of the principles that underlie successful demonstrations of behavior change
  • Readers should consider the technological description, the interpretation of results, and the level of conceptual integrity in experimental reports