Advanced Monitoring and Evaluation

Monitoring is vital for evidence-based decision-making: it gauges implementation progress against plan, keeping actual delivery, budgets, quality and scope aligned with what was planned. Monitoring entails collecting and analysing data on what is being implemented, chiefly on progress (or the lack of it), quality, scope and budgets. Evaluation is an equally important source of information for decision-making. It can be carried out before implementation, to establish the status quo (the situation the intervention seeks to change), and after implementation, to gauge the project's effect on those it served and to draw lessons for future work. In essence, M&E helps us gauge the value generated by the programmes and projects we finance, through careful tracking and assessment of their outcomes and impacts. Results-based approaches help answer questions such as those below (a short illustrative sketch follows the list):

  • What key outcomes and impacts were formulated around a project or programme?
  • Have these impacts and outcomes been achieved?
  • Are these outcomes and impacts sustainable and replicable in other contexts?
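By way of illustration only, the short Python sketch below shows the basic monitoring idea described above: comparing actual implementation and spend against plan to surface variances early. All names and figures are hypothetical and not part of the course material.

    # Hypothetical planned vs. actual figures for a borehole project.
    planned = {"boreholes_drilled": 20, "budget_spent": 500_000}
    actual = {"boreholes_drilled": 14, "budget_spent": 545_000}

    # Flag the variance on each monitored item as a share of plan.
    for item, plan_value in planned.items():
        variance = actual[item] - plan_value
        print(f"{item}: planned {plan_value}, actual {actual[item]} "
              f"({variance / plan_value:+.0%})")

Run as-is, this prints a -30% variance on drilling progress and a +9% budget overrun: exactly the kind of early-warning signal monitoring is meant to provide.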

Course Description

This advanced course enhances participants’ understanding of monitoring and evaluation and strengthens their M&E skills. Focused on usable skills, it helps participants develop sustainable, cost-effective M&E tools and practices suited to their workplace (their projects and institutions). It also deepens understanding of M&E processes, principles, methodologies and tools, knowledge that is valuable within the workplace and beyond. The course is therefore designed both to deepen intellectual appreciation of M&E and to equip participants with skills for improving organisational performance, transparency and accountability.

Participants are encouraged to come to the training with a project for which they can develop a monitoring and evaluation plan.

Who Should Attend

  • Government and NGO officials, project managers and project team members
  • M&E officers, development workers
  • Policy-makers and regulators working on projects
  • Any persons interested in M&E

Course Objectives

The course helps participants:

  • Deepen understanding of M&E and how these processes integrate with strategic management, project management, policy formulation and implementation
  • Gain knowledge of the principles and practices of M&E
  • Develop readily usable skills for ensuring success of projects and/or other activities
  • Know how to integrate various tools and techniques used in monitoring and evaluation
  • Develop appropriate indicators
  • Carry out project monitoring and evaluation
  • Be able to select and use appropriate methods for data collection and analysis
  • Develop and structure clear reporting processes that offer evidence for management decision-making
  • Understand how to link M&E findings to policy development processes
  • Consider own role in bringing about improved M&E within projects and organisations
  • Apply advanced competencies to improve or set up a functional M&E framework at one’s workplace

Course Content

  1. OVERVIEW OF M&E – CONCEPTS AND PROCESSES
  • Understanding basic M&E principles
  • Practical considerations and key steps in developing M&E systems
  • Understanding organisational performance
  • Selecting a work-based problem, opportunity or need for analysis
  • Thinking through projects/programmes to identify inputs, activities, outputs (deliverables) and outcomes
  2. STARTING THE M&E PROCESS
  • Work breakdown structure and scheduling
  • Thinking through the monitoring and evaluation framework
  • Setting up and reviewing the monitoring and evaluation framework
  • Aligning the framework to tools such as the work breakdown structure and Gantt chart
  • Aligning M&E to institutional, strategic, tactical and operational plans
  3. RBM, LOGICAL FRAMEWORK AND THEORY OF CHANGE (TOC) AND M&E
  • Applying programme theory and logic models for systematic planning and evaluation
  • The use of logic models in planning, implementation, monitoring and evaluation
  • Detailed look at RBM, the logframe and TOC
  4. HOW TO BUILD CAPACITY FOR M&E
  • Form a monitoring structure in relation to work to be done and resources available
  • Link structure with functions/roles
  • Assemble multi-skilled team
  • Understand skills range, strengths and weaknesses of team
  • Train the team on M&E, time management and communication
  5. DESIGNING PROGRAMME MONITORING SYSTEMS
  • Choosing key aspects to be monitored
  • Identifying data sources
  • Designing sound data collection and collation tools
  • Developing standard operating procedures (SOPs) for managing data
  6. DEVELOPING MEASURES AND INDICATORS
  • Overview of results-based M&E
  • Importance and use of indicators in M&E
  • Conceptualisation of indicators and other measurement instruments
  • Classification of indicator types
  • Indicators at various stages of the logic model
  • Requirements and characteristics of “good” indicators (an indicator sketch follows this outline)
  7. DATA COLLECTION & ANALYSIS FOR M&E
  • Research methodology for M&E processes and practices
  • Research design – problem statements and evaluation questions
  • How to conduct surveys and qualitative studies for M&E
  • Understanding the difference between process and outcome evaluation practices
  • Designing and selecting monitoring tools
  • Gathering and analysing data
  • Developing reports and presenting findings
  8. MANAGING DATA QUALITY
  • Data quality principles and rationale
  • Assessing risks to data quality
  • Tools for managing data quality
  • Ethical considerations in dealing with data
  • What routine data quality assessments (RDQAs) are
  • Tools and processes for conducting RDQAs
  • Communicating and using results of RDQAs (a verification-ratio sketch follows this outline)
  9. PLANNING AND MANAGING EVALUATIONS
  • Assess readiness for evaluation
  • Budgeting for evaluation
  • Identifying and selecting the evaluation team
  • Developing a comprehensive terms of reference (TOR)
  • Key considerations in managing consultants
  10. DESIGNING EVALUATIONS FOR FIELD-BASED PROGRAMMES
  • Evaluation principles and approaches for field-based programmes
  • Identifying evaluation questions and developing a learning agenda
  • Selecting an appropriate evaluation design
  • Collecting evaluation data
  11. ANALYSING AND REPORTING DATA FOR DECISION-MAKING
  • Developing a data analysis strategy
  • Selecting data analysis methods
  • Presenting analysed data
  • Common pitfalls/errors in analysing data
  • Identifying programme stakeholders and their information needs
  • Selecting appropriate communication tools for different audiences
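
As a hedged illustration of the indicator material in module 6 (all names, baselines and targets below are invented for the sketch), an indicator can be modelled as a baseline, a target and the latest actual value, from which achievement and a traffic-light status for management reporting follow directly:

    from dataclasses import dataclass

    @dataclass
    class Indicator:
        """A performance indicator with a baseline, a target and the latest actual value."""
        name: str
        baseline: float
        target: float
        actual: float

        def achievement(self) -> float:
            """Share of the planned change (target minus baseline) achieved so far."""
            planned_change = self.target - self.baseline
            return 1.0 if planned_change == 0 else (self.actual - self.baseline) / planned_change

        def status(self) -> str:
            """Traffic-light rating for management reporting."""
            pct = self.achievement()
            return "green" if pct >= 0.9 else "amber" if pct >= 0.6 else "red"

    # Hypothetical indicators for a water-supply project.
    indicators = [
        Indicator("Households with safe water (%)", baseline=40, target=80, actual=72),
        Indicator("Boreholes rehabilitated", baseline=0, target=25, actual=11),
    ]
    for ind in indicators:
        print(f"{ind.name}: {ind.achievement():.0%} of target ({ind.status()})")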
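
Similarly, for module 8: a routine data quality assessment commonly recounts values from source documents and compares them with what was reported upward. The sketch below computes that verification ratio; the figures are hypothetical.

    def verification_ratio(recounted: int, reported: int) -> float:
        """Recounted value (from source documents) divided by the reported value.
        A ratio below 1.0 suggests over-reporting; above 1.0, under-reporting."""
        if reported == 0:
            raise ValueError("reported value must be non-zero")
        return recounted / reported

    # Monthly clinic figure as reported vs. recounted from registers during an RDQA.
    print(f"Verification ratio: {verification_ratio(108, 120):.2f}")  # 0.90 -> over-reported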

The training approach is highly interactive, mixing presentations by the facilitator and by participants, group and individual exercises, case studies and role plays. These proven learning techniques enhance understanding and retention of the material covered.