Theory E²: Working with Entrepreneurs in Closely-Held Enterprises: XIII. Assessment in the Enterprise Cycle (Part Two)
Many times, we find that program assessments are unsatisfactory, not because they fail to determine whether an outcome has been achieved or an impact observed, but because they tell us very little about why a particular outcome or impact occurred. At the end of a program we may be able to determine that it has been successful. However, if we do not know the reasons for this success, that is, if we have not fully appreciated the complex dynamics operating within and upon this program unit, then we have little information of practical value. We have very few ideas about how to sustain or improve the program, or about how to implement a successful program somewhere else. All we can do is continue doing what we have already done. This choice is fraught with problems, for conditions can change so rapidly that programs that were once successful may no longer be so.
Michael Quinn Patton (1990) is among the most influential evaluators in emphasizing the pragmatic value inherent in a diagnostic focus. Having coined the phrase “utilization-focused evaluation,” Patton (1990, p. 105) suggests that:
Unless one knows that a program is operating according to design, there may be little reason to expect it to produce the desired outcomes. . . . When outcomes are evaluated without knowledge of implementation, the results seldom provide a direction for action because the decision maker lacks information about what produced the observed outcomes (or lack of outcomes). Pure pre-post outcomes evaluation is the “black box” approach to evaluation.
A desire to know the causes of program success or failure may be of minimal importance if an evaluation is being performed only to determine success or failure or if there are no plans to continue or replicate the program in other settings. However, if the evaluation is to be conducted while the program is in progress, or if there are plans for repeating the program somewhere else, evaluation should include appreciative procedures for diagnosing the causes of success and failure.
What are the characteristics of a diagnostic assessment that is appreciative in nature? First, this type of evaluation necessarily requires qualitative analysis (Patton, 1990, Chapters 2 and 3). Whereas evaluation that focuses on outcomes or that is deficit-oriented usually requires some form of quantifiable measurement, diagnostic evaluation is more often qualitative, or a mixture of qualitative and quantitative. Numbers in isolation rarely yield appreciative insights, nor do they tell us why something has or has not been successful. This does not mean that quantification is inappropriate to diagnostic evaluation; it suggests only that quantification is usually not sufficient. Second, the appreciative search for the causes of such complex social issues as the success or failure of a human resource development program requires a broad, systemic look at the program being evaluated in its social milieu. Program diagnosis must necessarily involve a description of the landscape. The program must be examined in its social and historical context.