Saturday, September 19, 2009

Assignment 2

Upon reading through the Student Services Program Description for Programming for Children with Severe Disabilities with an eye towards developing a plan for program evaluation, I found myself asking many questions. As with most programs, there are so many opportunities for evaluation that several questions really should be asked before a model for evaluation is chosen. One of the primary questions is, “What do they want evaluated?” but others that immediately went through my head included: Do they want summative or formative evaluation? Do they want to know if their funding was spent appropriately? Do they want to know if the children designated for this program were properly assessed? Are they wondering if the program was delivered according to the criteria established? Are they wondering if the objectives set out by the teacher were met for each child in the program? Interestingly, my own question, based on this very procedural description of the program (number of visits, who visits, eligibility criteria, etc.), was whether or not the children in the program were having their needs met. I rapidly realized that the answer to this question would depend heavily on who was doing the asking, since parents would be interested in different information than would the provincial funding department or, presumably, the Director of Special Education, for example. The purpose of the evaluation would also be an important aspect to consider in choosing a model, as some models are better suited for making judgements and others for program improvement.

So, for the purposes of this assignment, I am going to make a few assumptions about the context within which this evaluation would take place. I am going to assume that this is a new program that Alberta Education is implementing, that they are at the planning stages, and that they want to ensure the program continually improves and evolves so that the children receive the best services possible; for this reason, they are including program evaluation in the initial plan. Another assumption I am making is that, since evaluation is being included in the initial planning and throughout implementation, there is a modest budget allocated to allow for effective evaluation (though, noting that this is within a public education system, it likely wouldn’t be a robust budget).

Given this context, the model I would choose to use is Stufflebeam’s CIPP. The CIPP framework emphasises the collection of evaluative data, the purpose of which is to help decision-makers. It allows for formative evaluation to occur during implementation, with adjustments made accordingly so that processes are improved, as well as summative evaluation of the product of the program. With information gathered in each of the four areas (context, input, process and product), this evaluation should be able to provide a broad-based, comprehensive picture of the program as it is being implemented and, eventually, of the outcomes (I assumed outcomes were important since each program is individualized for a child’s specific needs). Such descriptive information would be important to the many stakeholders of a public education program and would also be useful for decision-makers. Understanding that catering formative evaluation strictly to decision-makers’ questions and needs might garner criticism from other stakeholders in a public education program, I would develop a participative approach to the planning, including decision-makers from various audiences (representatives from government funding, program developers, teachers and other education personnel delivering the program, and parents) in a focus-group type of setting. This would have to be done very carefully, to ensure that the group did not become so large that it lost focus and that the plan for evaluation did not become so complex that the outcomes of the evaluation process were rendered useless.

I think that the cyclical nature of this “process of delineating, obtaining and providing useful information to decision-makers, with the overall goal of programme or project improvement” (Robinson, 2002, p. 1), carried out in periodic consultation with representatives of the major stakeholders, makes the CIPP model well suited to evaluating this programme and would certainly provide for a stronger educational program for these children. This is primarily because the program description does not outline, to my satisfaction, what the purpose of the program is; the CIPP model begins with an evaluation of that context, which would, in my opinion, be valuable since everything else stems from it.

Note that additional material, including the quote, comes from:
Robinson, B. (2002, May 4). The CIPP approach to evaluation. COLLIT project: A background note from Bernadette Robinson. Retrieved September 19, 2009, from the Commonwealth of Learning Discussion Area web site: http://hub.col.org/2002/collit/att-0073/01-The_CIPP_approach.doc

Stufflebeam, D. L. (2002, June). CIPP Evaluation Model Checklist: A tool for applying the Fifth Installment of the CIPP Model to assess long-term enterprises. Retrieved September 19, 2009, from the Western Michigan University Evaluation Center web site: http://www.wmich.edu/evalctr/checklists/cippchecklist.htm

Wednesday, September 16, 2009

Assignment 1

Wonderwise Women in Science Learning had developed activity kits designed for formal classroom use. The Nebraska State Museum and 4H wanted to adapt these kits for informal use, develop a few new ones, and disseminate all of them across 10 states. They very carefully made their plans for implementation, including this evaluation of the process. The purpose of the report is very clearly a summative evaluation of the effectiveness of the dissemination – not of the kits or program itself, but of the process it took to spread the kits across the 10 states, with high use by 4H clubs at the end of three years, resulting in recommendations for other curriculum developers. The process was broken into five phases with four specific goals: a record of the process, documentation of the different strategies between states, an assessment of the effectiveness of each, and recommendations for others. The collaborative evaluators (the Nebraska State Museum and the Centre for Instructional Innovation) identified guiding questions and described the qualitative (telephone interviews) and quantitative (demographic surveys) methods of gathering information from the various sources needed to evaluate each phase, carefully considering the scope and diversity of sites and situations involved in the program while comparing and assessing the effectiveness of different processes. The flexibility built into the process, allowing differing states, people, and informal situations to tailor the process to suit them, turned out to be the most significant limiting factor in the evaluation but, ironically, was identified as a strength in the dissemination of the program – accentuating a couple of challenges in program evaluation (dealing with politics and balancing scientific soundness with practicality).

Like most things in the real world, this evaluation displays elements of more than one model of program evaluation. It is decidedly a formalized evaluation, purposefully prepared and carried out. The qualitative data gathered is not merely anecdotal, but descriptive. The first set of interviews was structured, the second was semi-structured, and the timing was adjusted to suit the state and situation of dissemination in each location – this necessary adaptation of the evaluation process is itself descriptive data. All of this information was summarized for the purpose of assessing the effectiveness of each process (a judgement) and producing recommendations for adapting future dissemination efforts. Although specific intentions were not stated in the report, descriptions of communication issues and differences in participant expectations indicate a comparison of congruence between what was intended and what was observed (revealing some failure both in the theory or planning and in the consistent implementation of the plan created). Finally, the fact that differences in processes used from state to state were identified as successes and failures indicates a recognition that there is a connection between variables. All of these characteristics point towards the evaluators following Stake’s Countenance model of evaluation. The only exception is the expectation that observations would be compared to a standard: I did not get the sense that the evaluators had a defined standard against which they were comparing their information, merely that some processes worked better than others given the state and/or situation the program was in.

That said, I could easily make a case that these evaluators were following Provus’ Discrepancy model. The opening sentence of the Conclusions is, “It would be nice at this point to be able to offer a recipe for success from this project that could be applied to other similar dissemination projects.” It could be argued that the purpose of this evaluation was to improve existing models of dissemination and establish better programs. The report identifies several purposes and the audiences the information is being gathered for: a record for the Principal Investigators of 4H; documentation of different strategies for 4H leaders; evaluative information for the funding sponsors; and recommendations for informal educators – all of which sounds much like accountability and efforts to help administrators make wise decisions. The report is a description, a summary of differences, and an assessment laid out step-by-step. If I had to choose, I would say that Frerichs and Spiegel’s approach aligns most closely with Provus’ Discrepancy model.

Frerichs, S. W., & Spiegel, A. N. Dissemination of the Wonderwise 4H Project: An evaluation of the process. Retrieved September 10, 2009, from http://www.hfrp.org/out-of-school-time/ost-database-bibliography

Tuesday, September 15, 2009

A starting point

For the purposes of this assignment and my own learning, I thought I would try to find what might be an exemplary evaluation of a program so that I could see ‘how it could be done’. In my search I stumbled upon a resource that I thought I should share. The Harvard Graduate School of Education has a project called the Harvard Family Research Project (http://www.hfrp.org/), where they help “stakeholders develop and evaluate strategies to promote the wellbeing of children, youth, families and their communities.” They have developed a searchable Out-of-School Time Program Research and Evaluation Database and Bibliography. As an out-of-school-time educator and master’s student studying the same, I found a wealth of information there. It is from this database that I found the exemplary evaluation I was interested in.