Program design often begins with a needs assessment, which helps to determine what a program will require to address the problems it targets. In this post, we will look at how needs assessments are often developed within the context of program evaluation and at the various levels of need that one may encounter.
Process of Needs Assessment
In program evaluation, a needs assessment has three phases: preassessment, assessment, and postassessment. The preassessment determines the problem’s current status and the assets available to address it. Common questions that a preassessment may address include whether the issue can be resolved, who is affected by the problem and/or the lack of resources, and what has been done in the past to address the situation. Sources of data for this phase include historical records and interviews.
The assessment phase involves collecting new information on the organization’s needs and assets. Whereas the preassessment looks at the past, the assessment looks at the present situation while addressing the same questions as the preassessment. Because of this overlap, it is common to skip the assessment and move directly to the postassessment.
The postassessment phase involves using the information gathered in the first two phases to develop appropriate interventions. For example, if a needs assessment finds a lack of resources for improving reading, an appropriate intervention may be the development of a reading lab. Naturally, the creation of a reading lab would necessitate funding, such as from a grant.
Levels of Need
Another aspect of a needs assessment is determining the level of need. In this context, need refers to who is receiving and giving services. The primary level identifies the recipients of a program’s services. For example, the students who use a reading lab would be at the primary level. Primary-level individuals need the program’s services.
The secondary level involves the individuals who provide the services of a program, such as the teachers who support the reading lab. Secondary-level individuals may need training, support, and/or the actual materials to make the program come to life.
The tertiary level is the actual support that secondary-level individuals use to make the program happen. As already mentioned, this can include training, materials, and/or other support. An example would be training teachers to use the reading lab and making the software readily available to them.
Conclusion
A needs assessment is often necessary when developing programs, especially large ones. This crucial step provides clarity about what needs to be developed. With these tools, program administrators can be sure that they are taking a scientific approach to supporting program participants.
Program implementation examines how a program is put into practice. The focus of any program is to bring change to whoever the stakeholders of the program are. Therefore, how the program is put into practice or implemented plays a critical role in whether the program is successful.
Components of Implementation
Joseph Durlak describes eight components of program implementation, as shown below.
Fidelity
Dosage
Quality
Adaptation
Participant engagement
Program differentiation
Monitoring of controlled conditions
Program reach
Most of these components are self-explanatory. Fidelity is the level of faithfulness implementors of the program have to its procedures and/or protocols. Many programs have an experimental nature in which participants are compared either to themselves as a “before” group or to a control group that does not experience the program. To confirm that the program is the reason for any difference, it must be verifiable that the program’s procedures were adhered to.
The same idea applies to dosage, which is the amount of the program that is experienced. This value must be consistent to establish any differences between groups. Dosage can be measured in terms of the amount, length of time, number of occurrences, etc., that the program requires.
Adaptations are the modifications that are made to the program for various reasons. Sometimes the original procedures of the program are not practical during implementation. For example, a program may expect participants to receive counseling twice a week for 30 minutes each time, for a total of an hour per week. During implementation, it may be found that the participants are not able to come twice a week. Therefore, instead of meeting twice a week, the program is adapted to meet once a week for one hour. It is critical to keep track of adaptations, as they can cause a program to lose its focus and original purpose.
Participant engagement is how involved and cooperative the participants in the program are. Low engagement is often a sign that a program is failing. If this does happen it may be necessary to make adaptations to the program.
Program differentiation is the awareness of how the current program is different from other programs. Knowing what makes a program different is critical in showing how it is superior to other interventions that have been tried. Understanding these differences also helps determine what does and does not work in terms of helping participants.
Monitoring of controlled conditions focuses on the control variables that need to be monitored when a program uses experimental and control groups. Lastly, program reach is a measure of how much of the target population is involved with the program.
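As a rough illustration, program reach reduces to a simple ratio of participants to the target population. The function name and the numbers below are hypothetical, not from Durlak:

```python
def program_reach(participants: int, target_population: int) -> float:
    """Fraction of the target population actually involved in the program."""
    if target_population <= 0:
        raise ValueError("target population must be positive")
    return participants / target_population

# Hypothetical numbers: 45 of 120 eligible fifth-graders use the reading lab
print(f"Reach: {program_reach(45, 120):.1%}")  # Reach: 37.5%
```

A reach well below 100% would prompt an evaluator to ask why eligible participants are not being served.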
It is critical to be aware of these components of implementation, as they help evaluators determine the level of success a program has had. It is also important to make sure that the individuals who are actually implementing the program are trained and supported throughout the entire implementation process. If the implementors do not know what to do or feel abandoned, then implementation will suffer.
Factors of Implementation
Components of implementation are aspects within the program itself. Factors of implementation are variables outside the program that influence it. According to Joseph Durlak, there are also several factors to be aware of when it comes to implementation. Some of the factors include the following
Community level
Traits of implementors
Program traits
Organizational factors
Processes
Staffing
Professional development
The community-level factor relates to traits of the community surrounding the program and can include the policies, politics, and level of funding for a program. A negative political environment, for example, can seriously hamper cooperation.
The implementers’ traits can include their skill level, confidence, sense of relevancy, and more. We have already discussed implementors, but if they lack skill, even the best programs will fail.
Program traits include how well the program fits with the school and/or how adaptable the program is. Sometimes a great program is a poor cultural fit and/or too rigid for the local context. Consider the dosage example used earlier: twice-a-week counseling may simply not be appropriate for a given context.
Organizational factors include the climate, openness, integration, etc., of the local organization that is supporting the program. A closed-off organization will probably not support any program no matter the benefits.
Processes include decision-making, communication, planning, etc. Programs require local stakeholders to make decisions about cooperation and other factors related to planning and implementation. If there is a bottleneck or resistance to developing processes, the program may never get off the ground.
Staffing is about leadership and how they support the program. Enthusiastic leaders may provide adequate support for a program while indifferent leaders may cause a program to fail. One reason for this is the control over resources and morale that leaders possess.
Professional development has already been alluded to; it is the support and training that implementers of a program need. It is of critical importance that the individuals who bring a program to life through implementation receive the support and training they need to ensure success. If the implementors are confused about what to do, the program has little hope of success.
Conclusion
Program implementation is often overlooked. People are so excited to begin a new program to help people that they often forget to assess its implementation. This oversight can lead to good programs being labeled as failures, which in turn leads to finger-pointing. Focusing on implementation can help to prevent this common occurrence.
Within the context of program evaluation, different schools of thought or paradigms affect how evaluators do evaluation. In this post, we will look specifically at the postpositivist paradigm.
Postpositivism
The postpositivist paradigm grew out of the positivist paradigm. Both paradigms believe in using the scientific method to uncover laws of human behavior. There is also a focus on experiments, whether true or quasi-experimental, along with the use of surveys and/or observation. However, postpositivism will also take a mixed-methods approach (combining quantitative with qualitative) when it makes sense.
The main differences between positivism and postpositivism are the level of certainty and their contrasting positions on metaphysics. Positivists focus on the absolute certainty of results, while postpositivists are more focused on the probability of certainty. In addition, positivists believe in one objective reality that exists independently of the observer, while postpositivists tend to have a more nuanced view of reality.
The typical academic research article follows the positivist/postpositivist paradigm. Such an article will contain a problem, purpose, hypotheses, methods, results, and conclusion. This structure is not unique to postpositivism, but it is important to note how ubiquitous this format is. The example above is primarily for quantitative research, but qualitative and mixed methods follow this format more loosely.
Within evaluation, postpositivism has influenced theory-based evaluation and program theory. Theory-based evaluation is focused on theories or ideas about what makes a great program, which are realized in the traits and tools used in the program.
Program theory is a closely related idea focused on the elements needed to achieve results and on showing how these elements relate to each other. The natural outgrowth of this is the logic model, which identifies what is needed for the program (inputs), what will be done with these resources (outputs), and what the impact of these resources is among stakeholders (outcomes). The logic model is the bedrock of program evaluation in many contexts, such as within the government.
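As a sketch, the inputs-to-outputs-to-outcomes chain of a logic model can be captured in a minimal data structure. The class and the reading-lab entries below are hypothetical illustrations, not a standard template:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """A minimal logic model: resources in, activities done, impact out."""
    inputs: list[str] = field(default_factory=list)    # what the program needs
    outputs: list[str] = field(default_factory=list)   # what is done with those resources
    outcomes: list[str] = field(default_factory=list)  # impact among stakeholders

# Hypothetical reading-lab example echoing the earlier post
reading_lab = LogicModel(
    inputs=["grant funding", "reading software", "trained teachers"],
    outputs=["three lab sessions per week", "software licenses deployed"],
    outcomes=["improved reading comprehension"],
)

def summarize(model: LogicModel) -> str:
    """One-line overview of the model's chain."""
    return (f"{len(model.inputs)} inputs -> "
            f"{len(model.outputs)} outputs -> "
            f"{len(model.outcomes)} outcomes")

print(summarize(reading_lab))  # 3 inputs -> 2 outputs -> 1 outcomes
```

Laying the model out this explicitly is part of what gives the logic model the structure and clarity discussed below.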
One reason for the success of the logic model is how incredibly structured and clear it is: anybody can understand the results, even if the results are not always useful. In addition, the logic model was developed earlier than other approaches to program evaluation, and it may be popular because it is one of the first approaches most students learn in graduate school.
The emphasis on theory with postpositivism can often be at the expense of what is taking place in the actual world. While the use of theory is critical for grounding a study scientifically this can be alienating to the stakeholders who are tasked with using the results of a postpositivist program evaluation. As such, other schools of thought have looked to address this.
Conclusion
Postpositivism is one of many ways to view program evaluation. The steps are highly clear and sequential, and generally, everybody knows what to do. However, the appearance of clarity does not imply that it exists. Other paradigms have challenged the usefulness of the results of a program evaluation inspired by postpositivism.
Program evaluation plays a critical role in assessing program performance. However, as with most disciplines of knowledge, there are different views or paradigms for how to assess a program.
The word paradigm, in this context, means a collection of assumptions or beliefs that shape an individual’s worldview. For example, creationists have assumptions about how life came to be that are different from those of people who believe in evolution. Just as paradigms influence science, they also play a role in how evaluators view the structure and purpose of program evaluation.
In this post, we will briefly go over four schools of thought or paradigms of program evaluation, along with a description of each and how they approach program evaluation. These four paradigms are
Postpositivist
Pragmatic
Constructivist
Transformative
Postpositivist
The postpositivist paradigm grew out of the positivist paradigm. Both paradigms are focused on the use of the scientific method to investigate a phenomenon. They also both support the idea of a single reality that is observable. However, postpositivists believe in a level of probability that accounts for human behavior, an assumption reflected in the paradigm’s heavy reliance on statistics, which focuses on probability.
Postpositivism is heavily focused on methods that involve quantitative data. Therefore, any program evaluator who is eager to gather numerical data is probably highly supportive of postpositivism.
Pragmatic
A pragmatic paradigm is one in which there is a strong emphasis on the actual use of the results. A pragmatist wants to collect data that they are sure will be used to make a difference in the program. In terms of data and methods, anything goes as long as it leads to implementation.
Since pragmatism is so flexible it is supportive of mixed methods which can include quantitative or qualitative data. While a postpositivist might be happy once the report is completed, a pragmatist is only happy if their research is used by stakeholders.
Constructivist
The constructivist paradigm is focused on how people create knowledge. Therefore, constructivists are focused on the values of people because values shape ideas and the construction of knowledge. As such, constructivists want to use methods that focus on the interaction of people.
With the focus on people, constructivists want to create a story using narrative approaches that are often associated with qualitative methods. It is possible but unusual for constructivists to use quantitative methods, because such an approach does not help to identify what makes a person tick in the same way an interview would.
Transformative
The transformative paradigm is focused on social justice. Therefore, adherents to this paradigm want to bring about social change. This approach constantly investigates injustice and oppression. The world and the system need to be radically changed for the benefit of those who are oppressed.
People who support the transformative paradigm are focused on the viewpoints of others and the development of more rights for minority groups. When the transformative paradigm guides a program evaluation, the evaluators will look for inequity, inequality, and injustice. Generally, with this approach, the outcome is already determined: some sort of oppression or injustice is happening, and the purpose of the evaluation is to determine where it is so that it can be stamped out.
Conclusion
The paradigm that someone adheres to has a powerful influence on how they would approach program evaluation. The point is not to say that one approach is better than the other. Instead, the point is that being aware of the various positions can help people to better understand those around them.
Whenever a program is implemented, there are always ways for things to go wrong. Treatment fidelity is a term used to describe the degree to which a program is implemented as intended in the grant proposal. Below is a list of common ways that treatment fidelity can become a problem
Adherence to implementation
Implementation competence
Difference in treatment
Program drift
We will look at each of these below
Adherence to Implementation
Implementation adherence is whether the provider of the program follows the intended procedures. For example, suppose we have a reading lab program to boost students’ reading comprehension. The procedures may be as follows.
Fifth-grade students are to use the reading lab on Monday, Wednesday, and Friday for 30 minutes each. (Dosage)
The students must be engaged actively in using the reading software
If the provider wanders from these procedures, it can quickly become an implementation issue. This is common: a teacher may take their kids on a field trip, there could be holidays, the teacher might do one hour one day and skip another day, etc. In other words, providers agree to a program but essentially do what they want when necessary. Every time these modifications happen, they impact the quality of the results, as factors are introduced into the study that were not originally planned for.
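Dosage adherence under the procedures above reduces to comparing delivered minutes against the planned three 30-minute sessions per week. The function name and session log below are hypothetical:

```python
# Planned dosage from the hypothetical procedures: Mon/Wed/Fri, 30 minutes each
PLANNED_MINUTES_PER_WEEK = 3 * 30  # 90 minutes

def dosage_adherence(session_minutes: list[int]) -> float:
    """Share of the planned weekly dosage that was actually delivered."""
    return sum(session_minutes) / PLANNED_MINUTES_PER_WEEK

# A week where the teacher ran one 45-minute and one 30-minute session
print(f"Adherence: {dosage_adherence([45, 30]):.1%}")  # Adherence: 83.3%
```

Logging something this simple each week makes deviations visible before they accumulate into a fidelity problem.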
Implementation Competence
Implementation competence is defined as the provider’s ability to follow directions. If the procedures are too complicated the provider may not be able to follow them for the benefit of the students in the program.
For example, a provider who is not comfortable with computers and the reading software may not be able to help students who are having technical issues. If too many students are unable to use the computers because the provider or teacher cannot help them, this could raise implementation competence concerns.
Difference in Treatment
Difference in treatment means that the treatment received by participants in the program must not be the same as what those outside the program receive. The treatments must be different so that comparisons can be made.
Sometimes when a new program is implemented, providers will want all students to experience it. In our reading lab example, the procedures might call for allowing only half of the fifth-graders who are below grade level in reading comprehension to use the lab. However, a teacher might decide to have all students participate in the reading lab because of its obvious benefits. If this happens, there is no way to compare the results of those who participate and those who do not.
Such well-meaning actions may benefit the students but damage the scientific process. It is always critical that there are differences in treatment so that it can be determined if the treatment makes a difference.
Program Drift
Program drift is the gradual weakening of the implementation of a program. People naturally lose discipline over time and this can apply to obeying the procedures of a program. For example, a provider might vigilantly follow the procedures of the reading lab in the beginning but may slowly allow more or less time for the students.
Program drift is hard to notice. One way to prevent it is to constantly re-train providers so that they are reminded about how to implement the program. Retraining is beneficial when providers want to implement the program correctly.
Conclusion
Treatment fidelity is critical to determine the quality and influence of a program. Evaluators need to be familiar with these common threats to fidelity so that they can provide the needed support to help providers.