Theory E²: Working with Entrepreneurs in Closely-Held Enterprises: XIII. Assessment in the Enterprise Cycle (Part Two)
A similar case can be made for assessing a mental health clinic’s performance. Patients at Clinic Gamma may be rated higher at a mental health status review than patients from Clinic Delta. However, we do not know whether this difference can be attributed to the severity of the mental health problems treated at the two clinics or to other confounding circumstances: differing socio-economic levels among the patients, differing levels of funding or staff support at the two clinics, or differing criteria among those who rate mental health status at each clinic. Simple outcome measures are rarely either accurate or fair, and they certainly should be avoided in making decisions regarding program continuation, expansion, or modification.
Fortunately, there are ways to assess program outcomes accurately and fairly without engaging a pure experimental design that may be neither feasible nor ethical. Donald Campbell and Julian Stanley (1966) have described a set of “quasi-experimental” designs that allow one to modify some of the conditions of the classic experimental design without sacrificing the clarity of the results obtained. Campbell and Stanley’s brief monograph on experimental and quasi-experimental designs is a classic in the field (see also Isaac, 1979), and any program evaluator who wishes to design an outcome-determination evaluation should consult it. Three of the most widely used of these quasi-experimental designs are the “time series,” the “nonequivalent control group design,” and the “rotational/counterbalanced design.”
Campbell and Stanley’s “time-series” design requires that some standard measure be taken periodically throughout the life of the organization: for example, rates of attrition in a college, average length of stay in a hospital, or percentage of product rejection on a production line. If such a measurement relates directly to one of the anticipated outcomes of the program being evaluated, the evaluator looks for a significant change in this measurement after the program has been in place for a given amount of time, among those units of the organization that are participating in the program. With this design, a sufficient number of measures must be taken before and after the program is initiated in order to establish a comparative base: at least three measures before and two after program initiation.
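The basic comparison behind this design can be sketched in a few lines of code. The sketch below is illustrative only: the function name and the quarterly attrition figures are hypothetical, and a real time-series evaluation would use many more measures and a formal interrupted time-series analysis rather than a simple comparison of averages.

```python
# A minimal sketch of the time-series comparison described above,
# using the college-attrition example. All figures are hypothetical.
from statistics import mean, stdev

def shift_after_initiation(pre, post):
    """Change in the average measure after program initiation,
    expressed in standard deviations of the pre-program baseline."""
    baseline, spread = mean(pre), stdev(pre)
    return (mean(post) - baseline) / spread

# Hypothetical quarterly attrition rates (%) for a college:
pre_program = [12.1, 11.8, 12.4]   # at least three measures before
post_program = [9.6, 9.2]          # at least two measures after

print(round(shift_after_initiation(pre_program, post_program), 2))
```

A large shift relative to the baseline’s normal variation (here, a drop of several baseline standard deviations) is the kind of change the design is meant to surface; small shifts within the pre-program fluctuation would not be persuasive.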
The second quasi-experimental design, the “nonequivalent control group design,” will in some cases help the evaluator partially overcome the Hawthorne effect among experimental group members and the sense of inferiority and “guinea pig” status among control group members. Rather than randomly assigning people to an experimental or a control group, the evaluator makes use of two or more existing groups. Two therapeutic programs, for instance, that offer the same type of services might be identified. Clients would select one or the other program on the basis of time preference, convenience of location, and so forth; it is hoped that these reasons would function independently of the outcomes being studied in the evaluation. One group would then receive the new program, while the other (the control group) would continue to receive the services already provided by the agency.
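Because the two intact groups may differ at the outset, the usual analysis compares each group’s change from its own pre-program baseline rather than the raw post-program scores. A minimal sketch of that comparison, with hypothetical client ratings and a hypothetical function name, might look like this:

```python
# Sketch of a nonequivalent control group comparison: two intact client
# groups are measured before and after only one receives the new program.
# All ratings are hypothetical.
from statistics import mean

def net_program_effect(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Change in the treated group's mean minus the change in the
    control group's mean (a difference-in-differences estimate)."""
    treated_change = mean(treat_post) - mean(treat_pre)
    control_change = mean(ctrl_post) - mean(ctrl_pre)
    return treated_change - control_change

# Hypothetical outcome ratings for clients at the two programs:
new_program_pre, new_program_post = [40, 42, 38], [55, 57, 56]
usual_care_pre, usual_care_post = [41, 39, 43], [46, 44, 45]

print(net_program_effect(new_program_pre, new_program_post,
                         usual_care_pre, usual_care_post))
```

Subtracting the control group’s change removes improvement that both groups would have shown anyway; what remains is attributed, cautiously, to the new program, subject to the assumption that clients’ reasons for choosing a program are independent of the outcomes being studied.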