Wednesday, September 16, 2009

Assignment 1

Wonderwise Women in Science Learning developed activity kits designed for formal classroom use. The Nebraska State Museum and 4H wanted to adapt these kits for informal use, develop a few new ones, and disseminate all of them across 10 states. They planned the implementation carefully, including this evaluation of the process. The purpose of the report is clearly summative: an evaluation of the effectiveness of the dissemination, not of the kits or the program itself but of the process used to spread the kits across the 10 states, with high use by 4H clubs at the end of three years, resulting in recommendations for other curriculum developers. The process was broken into five phases with four specific goals: recording the process, documenting the different strategies used from state to state, assessing the effectiveness of each, and making recommendations for others. The collaborative evaluators (the Nebraska State Museum and the Centre for Instructional Innovation) identified guiding questions and described the qualitative (telephone interviews) and quantitative (demographic surveys) methods used to gather information from the various sources needed to evaluate each phase, carefully considering the scope and diversity of the sites and situations involved in the program and comparing the effectiveness of the different processes. The flexibility built into the process, allowing different states, people, and informal situations to tailor it to suit them, turned out to be the most significant limiting factor in the evaluation but, ironically, was identified as a strength of the dissemination itself, accentuating two common challenges in program evaluation: dealing with politics and balancing scientific soundness with practicality.

Like most things in the real world, this evaluation displays elements of more than one model of program evaluation. It is decidedly a formalized evaluation, purposefully prepared and carried out. The qualitative data gathered are not merely anecdotal but descriptive. The first set of interviews was structured, the second was semi-structured, and the timing was adjusted to suit the state and the stage of dissemination in each location; this necessary adaptation of the evaluation process is itself descriptive data. All of this information was summarized to assess the effectiveness of the process, render a judgement, and produce recommendations for adapting future dissemination efforts. Although specific intents were not stated in the report, the descriptions of communication issues and differences in participant expectations indicate attention to the congruence between what was intended and what was observed (revealing failures both in the theory or planning and in the consistent implementation of the plan). Finally, the fact that differences in the processes used from state to state were identified as successes or failures indicates a recognition of connections between variables. All of these characteristics point toward the evaluators following Stake's Countenance model of evaluation. The one exception is the expectation that observations would be compared to a standard: I did not get the sense that the evaluators had a defined standard against which they compared their information, merely that some processes worked better than others given the state and/or situation the program was in.

That said, I could easily make a case that these evaluators were following Provus' Discrepancy model. The opening sentence of the Conclusions is, “It would be nice at this point to be able to offer a recipe for success from this project that could be applied to other similar dissemination projects.” It could be argued that the purpose of this evaluation was to improve existing models of dissemination and establish better programs. The stated purpose within the report identifies several aims and the audiences the information was gathered for: a record for the Principal Investigators of 4H; documentation of different strategies for 4H leaders; evaluative information for the funding sponsors; and recommendations for informal educators. All of this sounds much like accountability and an effort to help administrators make wise decisions. The report is a description, a summary of differences, and an assessment, laid out step by step.
If I had to choose, I would say that Frerichs and Spiegel follow Provus' Discrepancy model most closely.

Frerichs, S. W., & Spiegel, A. N. Dissemination of the Wonderwise 4H Project: An Evaluation of the Process. Retrieved September 10, 2009, from http://www.hfrp.org/out-of-school-time/ost-database-bibliography

1 comment:

  1. Tracy, you have chosen an interesting, wide-ranging program to evaluate. Although the evaluation was intended to show the success of the program, you have highlighted the fact that there seemed to be a lot of confusion around how it was conducted. I think you are right to point out that the lack of consistency will limit the opportunity for the results to be compared and shared. It feels like the evaluation was intended for too many audiences and may not have achieved the level of effectiveness that was hoped for.