Theory E²: Working with Entrepreneurs in Closely-Held Enterprises: XIII. Assessment in the Enterprise Cycle (Part Two)
The clients may need to be informed of the differences between the experimental and control groups before signing up, based on an understandable concern for their welfare. If this is the case, then a subset of the clients from the experimental and control groups can be paired on the basis of specific characteristics (e.g. motivation, socio-economic status, intelligence) that might otherwise confound comparisons between the self-selected groups. The two paired subgroups then become the focus of the outcome evaluation, while the remaining participants in the two groups are excluded from this aspect of the overall program evaluation.
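The matching procedure described above can be sketched programmatically. This is a minimal illustration, not a method from the text: the function name, the client records, the matching characteristics, and the exact-match tolerance are all assumptions made for the example.

```python
# Hypothetical sketch: pair experimental and control clients on the
# characteristics that might confound the comparison. Field names
# ("motivation", "ses") and the greedy strategy are assumptions.

def matched_pairs(experimental, control, keys, tolerance=0):
    """Greedily pair each experimental client with an unused control
    client whose value on every key differs by at most `tolerance`."""
    pairs = []
    available = list(control)
    for e in experimental:
        for c in available:
            if all(abs(e[k] - c[k]) <= tolerance for k in keys):
                pairs.append((e, c))
                available.remove(c)  # each control client is used once
                break
    return pairs

experimental = [{"id": 1, "motivation": 7, "ses": 3},
                {"id": 2, "motivation": 4, "ses": 5}]
control      = [{"id": 9, "motivation": 4, "ses": 5},
                {"id": 8, "motivation": 7, "ses": 3}]

pairs = matched_pairs(experimental, control, keys=("motivation", "ses"))
# Only the paired subgroups enter the outcome evaluation; unpaired
# participants are excluded from this part of the assessment.
```

In practice the tolerance would be loosened (or a distance measure used) so that near-matches on continuous characteristics such as intelligence scores can still be paired.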
A “rotational/counterbalanced design” also can be used in place of a classic experimental design, especially if no control group can be obtained and if the evaluators are particularly interested in specific aspects or sequences of activities in the program being evaluated. The rotational/counterbalanced design requires that the program be broken into three or four units. One group of program participants would be presented with one sequence of these units (e.g. Unit 1, Unit 3, Unit 2), a second group with a second sequence (e.g. Unit 3, Unit 2, Unit 1), and so forth. Ideally, each possible sequence of units should be offered. Outcomes are assessed at the end of each unit.
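The assignment of groups to every possible sequence can be illustrated with a short sketch. The unit names and group labels are hypothetical; with three units there are 3! = 6 sequences, so a full counterbalancing requires six participant groups.

```python
from itertools import permutations

# Hypothetical sketch of a rotational/counterbalanced assignment:
# each group receives the program units in a different order.
units = ["Unit 1", "Unit 2", "Unit 3"]

# Ideally, every possible sequence of units is offered to one group.
sequences = list(permutations(units))  # 3 units -> 6 sequences

for group, sequence in enumerate(sequences, start=1):
    # Outcomes would be assessed at the end of each unit in the sequence.
    print(f"Group {group}: {' -> '.join(sequence)}")
```

With four units the number of sequences grows to 24, which is one reason evaluators often limit the design to three or four units.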
An entrepreneur who makes use of this design will obtain substantial information about program outcomes, as well as some indication of interactions among program activities. The rotational/counterbalanced design might be used successfully in the assessment of a new set of training modules or a new public relations strategy. It would not only yield information about the overall success of the new set of modules or strategy but also suggest which sequence of modules or press releases is most effective. Campbell and Stanley describe a variety of other designs, indicating the strengths and weaknesses of each. They show that some designs are relatively more effective than others in certain circumstances, such as those involving limited resources and complex program outcomes. In addition, they suggest alternatives to the classic experimental design for situations in which that design may be obtrusive to the program being evaluated or otherwise not feasible.