While monitoring and evaluation are two distinct elements, both are geared towards learning from what you are doing and how you are doing it, and they share several essential objectives:
- Relevance
- Efficiency
- Effectiveness
- Impact
- Sustainability
- Causality
- Unanticipated results
- Alternative strategy
These elements may be referred to as the core objectives of monitoring and evaluation.
A schematic diagram is presented in Figure to illustrate the elements of monitoring and evaluation as they operate in actual settings.
We present a brief overview of these elements.
Relevance examines the appropriateness of program results to national needs and the priorities of target groups. Some critical questions related to relevance include:
- Do the program results address the national needs?
- Are they in conformity with the program’s priorities and policies?
- Should the results be adjusted or eliminated, or should new ones be added, in light of new needs, priorities, and policies?
Efficiency tells you whether the input into the work is appropriate in terms of the output. It compares the results obtained with the expenditure incurred and the resources used by the program during a given period.
The analysis focuses on the relationship between the quantity, quality, and timeliness of inputs, including personnel, consultants, travel, training, equipment, and miscellaneous costs, and the quantity, quality, and timeliness of the outputs produced and delivered.
It ascertains whether there was adequate justification for the expenditure incurred and examines whether the resources were spent as economically as possible.
Effectiveness is a measure of the extent to which a project (or development program) achieves its specific objectives.
If the objectives have not been achieved, the evaluation will identify whether the results should be modified (in the case of a mid-term evaluation) or the program extended (in the case of a final evaluation) to achieve the stated results.
If, for example, we conducted an intervention study to improve the qualifications of all high school teachers in a particular area, did we succeed?
Impact tells you whether or not your action made a difference to the problem situation you were trying to address.
In other words, was your strategy useful?
Referring to the above example, we ask: did ensuring that teachers were better qualified improve the schools' final-examination results?
Sustainability refers to the durability of program results after the termination of the technical cooperation channeled through the program. Some likely questions raised on this issue are:
- How likely is it that the program achievements will be sustained after the withdrawal of external support?
- Are the involved counterparts willing and able to continue the program activities on their own?
- Have program activities been integrated into current practices of counterpart institutions and/or the target population?
An assessment of causality examines the factors that have affected the program results. Some key questions related to causality, among others, are:
- What particular factors or events have modified the program results?
- Are these factors internal or external to the program?
Program evaluation may uncover significant unforeseen positive or negative results of program activities.
Once identified, appropriate action can be taken to enhance or mitigate them for a greater overall impact. Some questions often raised about unanticipated results are:
- Were there any unexpected positive or negative results of the program?
- If so, how should they be addressed? Can they be enhanced or mitigated to achieve the desired outcomes?