Designing a Monitoring and Evaluation Program

Most evaluations are concerned with the issue of program design. Design issues refer to factors that affect evaluation results and that emerge during program implementation.

A good design guides the implementation process, facilitates the monitoring of implementation, and provides a firm basis for performance evaluation.

Some key questions related to design are:

  1. Are inputs and strategies realistic, appropriate, and adequate to achieve the results?
  2. Are outputs, outcomes, and impacts clearly stated, describing solutions to the identified problems and needs?
  3. Are indicators direct, objective, practical, and adequate (DOPA)? Is the responsibility for tracking them identified?
  4. Have the external factors and risk factors to the program that could affect implementation been identified and have the assumptions about such risk factors been validated?
  5. Have the execution, implementation, and evaluation responsibilities been identified?
  6. Does the program design address the prevailing gender situation? Are the expected gender-related changes adequately described in the outputs? Are the identified gender indicators adequate?
  7. Does the program include strategies to promote national capacity building?

Proper monitoring and evaluation design during project preparation is a much broader exercise than just the development of indicators.

Good design has five components, viz.

  1. Clear statements of measurable objectives for the project and its components, for which indicators can be defined;
  2. A structured set of indicators, covering outputs of goods and services generated by the project and their impact on beneficiaries (a sketch of such a set follows this list);
  3. Provisions for collecting data and managing project records so that the necessary data for indicators are comparable with existing statistics, and are available at a reasonable cost;
  4. Institutional arrangements for gathering, analyzing and reporting project data, and for investing in capacity building to sustain the monitoring and evaluation services;
  5. Proposals for how the monitoring and evaluation will be fed back into decision-making.
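
To make components 1 to 4 concrete, the sketch below shows one way an indicator set might be recorded. It is a minimal illustration in Python; the field names (baseline, target, data_source, frequency, responsible) and the example indicators are assumptions made for illustration, not a prescribed standard.

```python
from dataclasses import dataclass

# Hypothetical structure for one entry in a project's indicator set.
# Field names and example values are illustrative assumptions only.
@dataclass
class Indicator:
    name: str          # what is measured
    level: str         # "output", "outcome", or "impact"
    baseline: float    # value at project start
    target: float      # value the project commits to
    unit: str          # unit of measurement
    data_source: str   # where the data will come from
    frequency: str     # how often it is collected
    responsible: str   # who tracks it (links to component 4)

indicator_set = [
    Indicator("Households with safe water access", "outcome",
              baseline=42.0, target=75.0, unit="% of households",
              data_source="annual household survey", frequency="yearly",
              responsible="project M&E officer"),
    Indicator("Wells constructed", "output",
              baseline=0, target=120, unit="wells",
              data_source="construction records", frequency="quarterly",
              responsible="field engineer"),
]
```

Keeping baselines, targets, data sources, and responsibilities together in one structure makes it easier to check that every indicator can actually be tracked at a reasonable cost.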

As there are differences between the design of a monitoring system and that of an evaluation process, we deal with them separately.

Under monitoring, we look at the process an organization could go through to design a monitoring system, and under evaluation, we look at:

  1. purpose of evaluation,
  2. key evaluation questions, and
  3. methodology.

Designing a Monitoring System

In designing a monitoring system for an organization or a project, the following steps are suggested:

  • Organize an initial workshop with staff, facilitated by consultants;
  • Generate a list of indicators for each of the three aspects: efficiency, effectiveness, and impact;
  • Clarify which variables you are interested in and gather data on them;
  • Decide how you will collect the data you need and how you will manage it (see the sketch after this list);
  • Decide how often you will analyze data;
  • Analyze data and report.
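
As a rough illustration of the last three steps, the sketch below records monitoring observations against indicators grouped under efficiency, effectiveness, and impact, and summarizes them at each analysis interval. The indicator names, periods, and values are invented for the example.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical monitoring log: each record is (indicator, period, value).
# The grouping into efficiency / effectiveness / impact is illustrative only.
records = [
    ("cost_per_well", "2024-Q1", 5200), ("cost_per_well", "2024-Q2", 4800),        # efficiency
    ("wells_completed", "2024-Q1", 18), ("wells_completed", "2024-Q2", 25),        # effectiveness
    ("households_served", "2024-Q1", 310), ("households_served", "2024-Q2", 520),  # impact
]

def summarize(records):
    """Group monitoring records by indicator and report the latest and average values."""
    by_indicator = defaultdict(list)
    for indicator, period, value in records:
        by_indicator[indicator].append((period, value))
    for indicator, observations in by_indicator.items():
        values = [v for _, v in observations]
        print(f"{indicator}: latest={values[-1]}, average={mean(values):.1f}")

summarize(records)  # run at each agreed analysis interval (e.g. quarterly)
```
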
Designing an Evaluation System

Designing an evaluation process means being able to develop Terms of Reference (TOR) for such a process so that timely evaluation information is available to inform decision making and ensure that program implementation can demonstrate accountability to its stakeholders.

Evaluation results are important for making adjustments in the ongoing program or for designing a new program cycle.

Careful design of evaluations and periodic updating of evaluation plans also facilitate their management and contribute to the quality of evaluation results.

In planning and designing an evaluation study, the following issues are usually addressed (see the sketch after this list):

  • The purpose of evaluation, including who will use the evaluation findings;
  • The main objectives of the evaluation and the questions it should address;
  • The sources of data and the methods to be followed in collecting and analyzing data;
  • The persons to be involved in the evaluation exercise;
  • The timing of each evaluation;
  • The budget requirement.
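
A sketch of how these planning issues might be assembled into a draft TOR outline is given below. Every value is a placeholder, and the field names are illustrative assumptions rather than a required format.

```python
# A minimal, illustrative skeleton of evaluation Terms of Reference (TOR),
# mirroring the planning issues listed above. All values are placeholders.
evaluation_tor = {
    "purpose": "Assess effectiveness and impact to inform the next program cycle",
    "users_of_findings": ["program management", "donor", "partner agencies"],
    "key_questions": [
        "Who benefits from the project, and in what way?",
        "Do the inputs justify the outputs?",
    ],
    "data_sources_and_methods": {
        "primary": ["household survey", "key informant interviews", "focus groups"],
        "secondary": ["progress reports", "official statistics", "meeting minutes"],
    },
    "evaluation_team": ["external evaluator", "project M&E officer"],
    "timing": {"start": "2025-03", "draft_report": "2025-05", "final_report": "2025-06"},
    "budget_usd": 25_000,
}
```
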

We provide a brief overview of the above aspects.

Purpose

The purpose of an evaluation is the reason why we are undertaking it. It goes beyond what we want to know to why we want to know it.

For example, an evaluation purpose may be: “To assess whether the project under evaluation has had its planned impact, in order to decide whether or not to replicate the model elsewhere,” or “To assess the program in terms of its effectiveness, impact on the target group, efficiency, and sustainability, in order to improve its functioning.”

Evaluation questions

The key evaluation questions are the central questions we want the evaluation process to answer. One can seldom answer “yes” or “no” to them.

A useful evaluation question should be thought-provoking, challenging, focused, and capable of raising additional questions.

Here are some examples of key evaluation questions related to an ongoing project:

  • Who is currently benefiting from the project, and in what way?
  • Do the inputs (money and time) justify the outputs? Either way, on what basis is that claim made?
  • What other strategies could improve the efficiency, effectiveness, and impact of the current project?
  • What are the lessons that can be learned from the experiences of this project in terms of replicability?

Methodology

The methodology section of the ‘terms of reference’ should provide a broad framework for how the project wants the evaluation to be carried out.

Both primary and secondary data sources may be employed to obtain evaluation information.

The primary sources may include surveys, key informants, and focus group discussions. The secondary sources may include, among others, published reports, datasheets, minutes of meetings, and the like.

This section might also include some indication of reporting formats:

  • Will all reporting be written?
  • To whom will the reporting be made?
  • Will there be an interim report? Or only a final report?
  • What sort of evidence does the project require to back up the evaluator’s opinion?
  • Who will be involved in the analysis?

Information collection

This section is concerned with two aspects:

  1. baselines and damage control, and
  2. methods.

By damage control, we mean what one would need to do if baseline information was not collected and is therefore unavailable when the evaluation starts.

The collection of baseline information may involve general information about the situation.

Official statistics often serve this purpose. If not, you may need to secure these data by conducting a comprehensive field survey. In doing so, be certain to collect information that focuses on the indicators of impact.

Suppose you decide to use life expectancy as an indicator of the mortality condition of a country.

Several variables might have an impact on life expectancy, such as gender, socioeconomic conditions, religion, environment, sanitation, and the like.

The right choice of variables will enable you to measure the impact and effectiveness of the program and thereby improve its efficiency.
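
As a toy illustration of choosing and checking such variables, the sketch below disaggregates invented life-expectancy records by gender and by sanitation access to see how each variable relates to the indicator. The data and variable choices are assumptions made purely for the example.

```python
from collections import defaultdict
from statistics import mean

# Illustrative records only: (gender, sanitation_access, life_expectancy_years).
# A real analysis would use survey or vital-statistics data.
records = [
    ("female", True, 74.1), ("female", False, 68.3),
    ("male", True, 70.5), ("male", False, 64.9),
]

def mean_by(records, key_index, label):
    """Compute mean life expectancy for each group defined by one variable."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key_index]].append(rec[-1])
    for group, values in groups.items():
        print(f"{label}={group}: mean life expectancy {mean(values):.1f} years")

mean_by(records, 0, "gender")       # disaggregate by gender
mean_by(records, 1, "sanitation")   # disaggregate by sanitation access
```
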

Unfortunately, it is not always possible to get the desired baseline information after you have begun work: the situation may have changed over time, and the information was not collected at the beginning of the project.

You may not even have decided what your important indicators were when the project began.

This is the situation that calls for damage control.

However, you may get anecdotal information from those who were involved at the beginning, and you can ask participants retrospectively if they remember what the situation was when the project began.

Sometimes the use of control groups is a neat solution to this problem. Control groups are groups of people who have not received inputs from your project but are very similar to those with whom you are working.
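
The sketch below shows, with invented numbers, how a control group can stand in for a missing baseline: the change observed in the control group approximates what would have happened without the project, so the difference between the two changes gives a rough estimate of the project's effect. This is a simplified illustration, not a full impact-evaluation design.

```python
from statistics import mean

# Hypothetical before/after outcome scores for project participants and a
# comparable control group; all numbers are invented for illustration.
project_before, project_after = [52, 48, 55, 50], [68, 70, 66, 72]
control_before, control_after = [51, 49, 54, 50], [55, 56, 53, 57]

project_change = mean(project_after) - mean(project_before)
control_change = mean(control_after) - mean(control_before)

# The control group's change approximates the counterfactual; the gap between
# the two changes is a rough estimate of the project's effect.
print(f"Change in project group:  {project_change:+.1f}")
print(f"Change in control group:  {control_change:+.1f}")
print(f"Estimated project effect: {project_change - control_change:+.1f}")
```
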

Monitoring and evaluation processes can make extensive use of random sampling procedures, ranging from very simple to highly complex designs.

The commonly employed sampling methods are simple random sampling, stratified sampling, systematic sampling, cluster sampling, and multi-stage sampling.
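
A minimal sketch of the first three methods, using Python's standard library on a hypothetical sampling frame of 200 households, is given below; the frame, strata, and sample sizes are assumptions for illustration only.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical sampling frame of 200 households, half rural and half urban.
frame = [{"id": i, "stratum": "rural" if i < 100 else "urban"} for i in range(200)]

# Simple random sampling: every unit has an equal chance of selection.
simple_sample = random.sample(frame, k=20)

# Stratified sampling: draw separately from each stratum so both are represented.
stratified_sample = []
for stratum in ("rural", "urban"):
    units = [u for u in frame if u["stratum"] == stratum]
    stratified_sample.extend(random.sample(units, k=10))

# Systematic sampling: pick every k-th unit after a random start.
step = len(frame) // 20
start = random.randrange(step)
systematic_sample = frame[start::step]

print(len(simple_sample), len(stratified_sample), len(systematic_sample))
```

Cluster and multi-stage sampling follow the same logic but first select groups of units (for example, villages) and then sample within them.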

Methods of data collection may also vary widely depending on the purpose and objectives of your program.

Analyzing Information

A vital component of both monitoring and evaluation is information analysis. Analysis is the process of turning detailed information into an understanding of patterns, trends, and interpretations.

Once you have analyzed the information, the next step is to write up your analysis of the findings in the form of a report as a basis for reaching conclusions and making recommendations.

The following steps are important in an analysis that culminates in a report (a sketch follows the list):

  • Determine the key indicators for the monitoring/evaluation;
  • Collect information around the indicators;
  • Develop a structure for your analysis based on your intuitive understanding of emerging themes and concerns;
  • Go through your data, organize it under the perceived themes and concerns;
  • Identify patterns, trends, and possible interpretations;
  • Write up your findings, conclusions, and recommendations (if any).
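
The sketch below illustrates the middle steps, organizing invented observations under their indicators and reading off the direction of change; a real analysis would of course go further into interpretation. All indicator names and values are assumptions for the example.

```python
from collections import defaultdict

# Illustrative monitoring observations: (indicator, period, value).
observations = [
    ("school_attendance_%", "2023", 61), ("school_attendance_%", "2024", 67),
    ("dropout_rate_%", "2023", 14), ("dropout_rate_%", "2024", 11),
]

# Organize the data under each indicator, in period order, then look for the trend.
by_indicator = defaultdict(list)
for indicator, period, value in sorted(observations):
    by_indicator[indicator].append(value)

for indicator, values in by_indicator.items():
    direction = "rising" if values[-1] > values[0] else "falling"
    print(f"{indicator}: {values[0]} -> {values[-1]} ({direction})")
```
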