Theory E²: Working with Entrepreneurs in Closely-Held Enterprises: XIII. Assessment in the Enterprise Cycle (Part Two)

To make this comparison between a group that has participated in a program (the “experimental” group) and a group that has not (the “control” group), several research design decisions must be made. Most evaluators try to employ a design in which people are assigned randomly to the experimental and control groups, and in which both groups are given pre- and post-program evaluations that assess the achievement of specific outcomes. Typically, the control group is not exposed to any program; alternatively, it is exposed to a similar program that has already been offered in or by the organization. In the latter case, there should ideally be at least two control groups: one that receives no program services and another that receives an alternative to the program being evaluated.
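The assignment scheme described above can be sketched in a few lines of Python. This is a minimal, illustrative example only (the function and variable names are my own, not part of any evaluation toolkit): participants are shuffled and dealt round-robin into one experimental group and the two kinds of control group discussed here.

```python
import random

def assign_groups(participants, seed=42):
    """Randomly assign participants to one experimental group and two
    control groups: one receiving no program, one an alternative program."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    groups = {"experimental": [], "control_no_program": [], "control_alternative": []}
    names = list(groups)
    for i, person in enumerate(shuffled):
        groups[names[i % 3]].append(person)  # deal round-robin into the 3 groups
    return groups

people = [f"P{n}" for n in range(12)]
for name, members in assign_groups(people).items():
    print(name, len(members))
```

With twelve participants, each group receives four members; each group would then be given the same pre- and post-program outcome measures.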

While this experimental design is classic in evaluation research, it is difficult to achieve in practice. First, people often cannot be assigned randomly to alternative programs. Second, a control group may not provide an adequate comparison for an experimental group. If customers in a control group know that they are “controls,” this knowledge will influence their attitudes toward, and subsequently their participation in, the program that serves as the control. Conversely, an experimental group is likely to put forth extra effort if it knows its designation; this is what is often called the “Hawthorne Effect.” It may be difficult to keep information about involvement in an experiment from participants in either the experimental or control group, particularly in small organizations, and some people consider withholding this type of information to be unethical.

Third, test and retest procedures are often problematic. In assessing a customer’s or worker’s attitudes, knowledge or skills before and after a program, one cannot always be certain that the two assessment procedures are actually comparable. Furthermore, if there is no significant change between pre- and post-program outcome measurements, one can never be certain that the program had no impact; the measuring instruments simply may be insensitive to changes that have occurred. On the other hand, the customers or workers may already be operating at a high level when the pre-test is taken, leaving little room for improvement on the retest. This is the so-called “Ceiling Effect.”
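The Ceiling Effect can be made concrete with a small worked example. In the sketch below (an illustration of my own, assuming a test scored out of 100), two workers improve by the same “true” amount, but the high performer’s measured gain is compressed because the instrument cannot register scores above its maximum.

```python
def observed_gain(pre_score, true_improvement, max_score=100):
    """Return the gain the test can actually register: the post-test
    score is truncated at the instrument's maximum possible score."""
    post_score = min(pre_score + true_improvement, max_score)
    return post_score - pre_score

# Both workers genuinely improve by 20 points.
print(observed_gain(50, 20))  # low starter: full 20-point gain is visible
print(observed_gain(95, 20))  # high starter: only 5 points can be measured
```

An evaluator comparing these two retest results might wrongly conclude the program helped the first worker far more than the second, when in fact the instrument, not the program, produced the difference.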



About the Author

William Bergquist, Ph.D., is an international coach and consultant in the fields of psychology, management and public administration, the author of more than 50 books, and president of a psychology institute. Dr. Bergquist consults on and writes about personal, group, organizational and societal transitions and transformations. His published work ranges from the personal transitions of men and women in their 50s, and the struggles of men and women recovering from strokes, to the experiences of freedom among the men and women of Eastern Europe following the collapse of the Soviet Union. In recent years, Bergquist has focused on the processes of organizational coaching. He is coauthor with Agnes Mura of coachbook, co-founder of the International Journal of Coaching in Organizations and co-founder of the International Consortium for Coaching in Organizations.

