Planning and Carrying Out Evaluations Based on the CIPP Model
This paper’s concluding part is keyed to the appended CIPP Evaluation Model Checklist. That checklist is designed to help evaluators and their clients plan, conduct, and assess evaluations based on the requirements of the CIPP Model and the Joint Committee (1994) Program Evaluation Standards. While the checklist is self-explanatory and can stand alone in evaluation planning efforts, the following discussion is intended to encourage and support its use.
The checklist is comprehensive in providing guidance for thoroughly evaluating long-term, ongoing programs. However, users can apply the checklist flexibly, drawing on those parts that fit the needs of particular evaluations. The checklist also provides guidance for both formative and summative evaluations.
An important feature is the inclusion of checkpoints for both evaluators and clients/stakeholders. For each of the 10 evaluation components, the checklist provides checkpoints on the left for evaluators and corresponding checkpoints on the right for evaluation clients and other users. The checklist thus delineates in some detail what clients and evaluators need to do individually and together to make an evaluation succeed.
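For users who keep their evaluation planning materials in electronic form, this two-column layout can be mirrored in a simple data record. The sketch below (in Python) is offered only as an illustration of the checklist’s structure; the component name is drawn from the checklist, but the example checkpoints are hypothetical paraphrases rather than the checklist’s actual wording.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChecklistComponent:
    """One of the checklist's 10 evaluation components."""
    name: str
    evaluator_checkpoints: List[str] = field(default_factory=list)  # left-hand column
    client_checkpoints: List[str] = field(default_factory=list)     # right-hand column

# Illustrative entry only; the checkpoint wording here is hypothetical.
contractual_agreements = ChecklistComponent(
    name="Contractual Agreements",
    evaluator_checkpoints=["Clarify the reporting schedule and audiences"],
    client_checkpoints=["Commit staff time to assist with data collection"],
)
```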
Concepts Underlying the Checklist
As seen in this paper’s first two parts, the definition of evaluation underlying this checklist is that evaluations should assess and report an entity’s merit, worth, probity, and/or significance and also present lessons learned. Moreover, CIPP evaluations and applications of this checklist should meet the Joint Committee (1994) standards of utility, feasibility, propriety, and accuracy. The checklist’s contents are configured according to the theme that evaluation’s most important purpose is not to prove but to improve. Also, as described previously in this paper, the recommended evaluation approach is values-based and objective in its orientation.
Contractual Agreements
The checklist’s first section identifies essential agreements to be negotiated in an evaluation contract (or memorandum of agreement). These give both parties assurance that the evaluation will yield timely, responsive, valid reports and be beyond reproach; that the necessary cooperation of the client group will be provided; that the roles of all evaluation participants will be clear; that budgetary agreements will be appropriate and clear; and that the evaluation agreements will be subject to modification as needed.
CIPP Components
The checklist’s next seven sections provide guidance for designing context, input, process, impact, effectiveness, sustainability, and transportability evaluations. Recall that the impact, effectiveness, sustainability, and transportability evaluations are subparts of product evaluation. Experience has shown that such a breakout of product evaluation is important in multiyear evaluations of large-scale, long-term programs.
The seven CIPP components may be employed selectively, in different sequences, and often simultaneously, depending on the needs of particular evaluations. In particular, evaluators should take into account any sound evaluation information the clients/stakeholders already have or can get from other sources. As stressed in Part I of this paper, CIPP evaluations should complement rather than supplant other defensible evaluations of a program or other entity.
Formative Evaluation Reports
Ongoing, formative reporting checkpoints are embedded in each of the CIPP components. These are provided to assist groups in planning, carrying out, institutionalizing, and/or disseminating effective services to targeted beneficiaries. Timely communication of relevant, valid evaluation findings to the client and right-to-know audiences is essential in sound evaluations. As needed, findings from the different evaluation components should be drawn together and reported periodically, typically once or twice a year, but more often if needed.
The general process, for each reporting occasion, calls for draft reports to be sent to designated stakeholders about 10 working days prior to a feedback session. At the session, the evaluator may use visual aids, e.g., a PowerPoint presentation, to brief the client, staff, and other members of the audience. It is a good idea to provide the client with a copy of the visual aids, so subsequently he or she can brief board members or other stakeholder groups on the most recent evaluation findings.
Those present at the feedback session should be invited to raise questions, discuss the findings, and apply them as they choose. At the session’s end, the evaluator should summarize the evaluation’s planned next steps and future reports; arrange for needed assistance from the client group, especially in data collection; and inquire whether any changes in the data collection and reporting plans and schedule would make future evaluation services more credible and useful.
Following the feedback session, the evaluators should finalize the evaluation reports, revise the evaluation plan and schedule as appropriate, and transmit to the client and other designated recipients the finalized reports and any revised evaluation plan and schedule.
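Where feedback sessions are scheduled well in advance, the ten-working-day lead time for draft reports can be planned mechanically. The short Python sketch below assumes a Monday-through-Friday work week and ignores holidays; it simply counts back the required number of working days from a scheduled session date to find the latest date for circulating drafts.

```python
from datetime import date, timedelta

def draft_due_date(feedback_session: date, working_days: int = 10) -> date:
    """Count back the given number of working days (Mon-Fri) from the session date."""
    d = feedback_session
    remaining = working_days
    while remaining > 0:
        d -= timedelta(days=1)
        if d.weekday() < 5:  # Monday = 0 ... Friday = 4
            remaining -= 1
    return d

# Example: drafts for a session on Friday, October 24, 2025 are due by October 10, 2025.
print(draft_due_date(date(2025, 10, 24)))  # 2025-10-10
```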
Metaevaluation
The checklist’s next-to-last section provides details for both formative and summative metaevaluation. Metaevaluation is to be done throughout the evaluation process. Evaluators should regularly assess their own work against appropriate standards as a means of quality assurance. They should also encourage and cooperate with independent assessments of their work. Typically, the client or a third party should commission and fund the independent metaevaluation. At the end of the evaluation, evaluators are advised to attest to the extent to which applicable professional standards were met.
The Summative Evaluation Report
The checklist concludes with detailed steps for producing a summative evaluation report. This is a synthesis of all the findings to inform the full range of audiences about what was attempted, done, and accomplished; the bottom-line assessment of the program; and what lessons were learned.
Reporting summative evaluation findings is challenging. A lot of information has to be compiled and communicated effectively. The different audiences likely will have varying degrees of interest and tolerance for long reports. The evaluator should carefully assess the interests and needs of the different audiences and design the final report to help each audience get directly to the information of interest. This checklist recommends that the final report actually be a compilation of three distinct reports.
The first, the program antecedents report, should inform those not previously acquainted with the program about the sponsoring organization, how and why the program was started, and the environment in which it was conducted.
The second, the program implementation report, should give accurate details of the program to groups that might want to carry out a similar program. Key parts of this report should include descriptions of the program’s beneficiaries, goals, procedures, budget, staff, facilities, etc. This report should be essentially objective and descriptive. While it is appropriate to identify important program deficiencies, judgments mainly should be reserved for the program results report.
The third, the program results report, should address questions of interest to all members of the audience. It should summarize the employed evaluation design and procedures. It should then inform all members of the audience about the program’s context, input, process, impact, effectiveness, sustainability, and transportability. It should present conclusions on the program’s merit, worth, probity, and significance. It should lay out the key lessons learned.
The summative evaluation checkpoint further suggests that, when appropriate, each of the three subreports end with photographs that retell the subreport’s account. These can enhance the reader’s interest, highlight the most important points, and make the narrative more convincing. A set of photographs (or charts) at the end of each subreport also helps make the overall report seem more approachable than a single, long presentation of the narrative.
This final checkpoint also suggests interspersing direct quotations from stakeholders to help capture the reader’s interest, providing an executive summary for use in policy briefing sessions, and issuing an appendix of evaluation materials to document and establish credibility for the employed evaluation procedures.
Summation
The CIPP Model treats evaluation as an essential concomitant of improvement and accountability within a framework of appropriate values and a quest for clear, unambiguous answers. It responds to the reality that evaluations of innovative, evolving efforts typically cannot employ controlled, randomized experiments or work from published evaluation instruments—both of which yield far too little information anyway. The CIPP Model is configured to enable and guide comprehensive, systematic examination of efforts that occur in the dynamic, septic conditions of the real world, not the controlled conditions of experimental psychology and split-plot crop studies in agriculture.
The model sees evaluation as essential to society’s progress and well-being. It contends that social groups cannot make their programs, services, and products better unless they learn where they are weak and strong. Developers and service providers cannot be sure their goals are worthy unless they validate the goals’ consistency with sound values and responsiveness to beneficiaries’ needs; they cannot plan effectively and invest their time and resources wisely if they don’t identify and assess options; they cannot earn continued respect and support if they cannot show that they have responsibly carried out their plans and produced beneficial results; they cannot build on past experiences if they don’t preserve, study, and act upon lessons from failed and successful efforts; and they cannot convince consumers to buy or support their services and products unless their claims for these services are valid and honestly reported.
Institutional personnel cannot meet all of their evaluation needs unless they both contract for external evaluations and build and apply the capacity to conduct internal evaluations. Evaluators cannot defend their evaluative conclusions unless they key them to sound information and clear, defensible values. Moreover, internal and external evaluators cannot maintain credibility for their evaluations if they do not subject them to metaevaluations against appropriate standards.
The CIPP Model employs multiple methods, is based on a wide range of applications, is keyed to professional standards for evaluations, is supported by an extensive literature, and is buttressed by practical procedures, including a set of evaluation checklists and particularly the CIPP Evaluation Model Checklist appended to this paper. It cannot be overemphasized, however, that the model is and must be subject to continuous assessment and further development.
References
Stufflebeam, D. L., Foley, W. J., Gephart, W. J., Guba, E. G., Hammond, R. L., Merriman, H. O., & Provus, M. M. (1971). Educational evaluation and decision making. Itasca, IL: Peacock.
Stufflebeam, D. L., Gullickson, A. R., & Wingate, L. A. (2002). The spirit of Consuelo: An evaluation of Ke Aka Ho‘ona. Kalamazoo: Western Michigan University Evaluation Center.
Stufflebeam, D. L., Jaeger, R. M., & Scriven, M. (1992, April 21). A retrospective analysis of a summative evaluation of NAGB’s pilot project to set achievement levels on the National Assessment of Educational Progress. Chair and presenter at the annual meeting of the American Educational Research Association, San Francisco.
Stufflebeam, D. L., Madaus, G. F., & Kellaghan, T. (2000). Evaluation models: Viewpoints on educational and human services evaluation. Boston: Kluwer.
Stufflebeam, D. L., & Millman, J. (1995, December). A proposed model for superintendent evaluation. Journal of Personnel Evaluation in Education, 9(4), 383-410.
Stufflebeam, D. L., & Nevo, D. (1976, Winter). The availability and importance of evaluation information within the school. Studies in Educational Evaluation, 2, 203-209.
Stufflebeam, D. L., & Webster, W. J. (1988). Evaluation as an administrative function. In N. Boyan (Ed.), Handbook of research on educational administration (pp. 569-601). White Plains, NY: Longman.
Tyler, R. W. (1942). General statement on evaluation. Journal of Educational Research, 36, 492-501.
U.S. General Accounting Office. (2003). Government auditing standards (The yellow book). Washington, DC: Author.
U.S. Office of Education. (1966). Report of the first year of Title I of the Elementary and Secondary Education Act. Washington, DC: General Accounting Office.
Webster, W. J. (1975, March). The organization and functions of research and evaluation in large urban school districts. Paper presented at the annual meeting of the American Educational Research Association, Washington, DC.