Comprehensive M&E for Power Generators

Overview

Monitoring is vital for informed, evidence-based decision-making: it gauges implementation progress against what was planned and keeps actual implementation, budgets, quality and scope aligned with the plan. Monitoring entails the collection and analysis of data on what is being implemented, largely concerning progress (or the lack of it), quality, scope and budget. Evaluation is an equally important information-generating tool for informed decision-making. It can be carried out before implementation, to give insight into the status quo (the nuanced reality that the intervention seeks to alter), and after implementation (ex-post), to gauge the effect of the project on those it acted upon and to draw lessons for future work. In essence, M&E helps us gauge the value generated from the programmes and projects we finance, through careful tracking and evaluation of their outcomes and impacts. Results-based approaches help answer questions such as:

  • What key outcomes and impacts were formulated around a project or programme?
  • Have these impacts and outcomes been achieved?
  • Are these outcomes and impacts sustainable and replicable in other contexts?

Comprehensive M&E for Power Generators – Customised (Pretoria, South Africa)

This Garvey Africa Institute M&E course provides the fundamentals of planning and implementing programme-based monitoring and evaluation, with emphasis on the design and implementation of monitoring and evaluation systems. Focused on developing usable skills, the course helps participants build sustainable and cost-effective monitoring and evaluation tools and practices appropriate for their workplace, their projects and their institutions. In doing so, it deepens participants’ understanding of M&E processes, principles, methodologies and tools, knowledge that is vital in the workplace and beyond.

Participants are encouraged to come to the course with a project for which they can develop a monitoring and evaluation plan. The course is suitable for:

  • M&E Officials
  • Project Implementors
  • Government Officials
  • NGO officials
  • Project Managers and Project Team Members
  • Development workers
  • Policy-makers and regulators working on projects
  • Any persons interested in M&E

The course helps participants:

  • understand central concepts and practical approaches for performance monitoring and evaluation
  • gain hands-on experience in designing M&E systems
  • know how to set up an M&E framework
  • understand how to develop and track performance indicators over the life of the project
  • strengthen skills for evaluating a project against set results
  • develop and implement a comprehensive monitoring and evaluation system
  • understand the role of stakeholders in monitoring and evaluation
  • know how to link M&E findings to overall organisational strategic plans and policies

1. OVERVIEW OF M&E – CONCEPTS AND PROCESSES

  • Introduction
  • Understanding M&E basic principles
  • Thinking through key steps in developing comprehensive M&E systems
  • Understanding organisational performance
  • Selecting a work-based problem, opportunity or need for analysis
  • Thinking through projects/programmes to identify inputs, activities, outputs (deliverables) and outcomes (a simple results-chain sketch follows this list)
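
For illustration only, the sketch below shows one way a project's results chain of inputs, activities, outputs and outcomes could be captured in code. The project name, field values and structure are hypothetical, not drawn from the course materials.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResultsChain:
    """A simple results-chain record linking inputs to intended outcomes."""
    project: str
    inputs: List[str] = field(default_factory=list)      # resources invested
    activities: List[str] = field(default_factory=list)  # what the project does
    outputs: List[str] = field(default_factory=list)     # direct deliverables
    outcomes: List[str] = field(default_factory=list)    # changes the deliverables should produce

# Hypothetical example for a power-generation maintenance programme
chain = ResultsChain(
    project="Turbine maintenance programme",
    inputs=["budget", "maintenance crew", "spare parts"],
    activities=["scheduled inspections", "component replacement"],
    outputs=["turbines serviced on schedule"],
    outcomes=["reduced unplanned outages", "higher availability factor"],
)
print(chain.outcomes)
```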

2. STARTING THE M&E PROCESS

  • Project structure
    o Work breakdown structure and scheduling
  • Thinking through monitoring and evaluation framework
  • Set up and review monitoring and evaluation framework
  • Align the framework to tools such as work-breakdown structure and Gantt chart
  • Align M&E to institutional, strategic, tactical and operational plans

3. COMPREHENSIVE MONITORING AND EVALUATION FRAMEWORK

  • M&E framework design
  • Implementing the M&E Framework
  • Evaluation
    o Outcomes
    o Impact
  • Cost-Effectiveness Analysis
  • Collecting different types of monitoring data
    o Individual project dimension
    o Program dimension
  • Core components of an M&E system

4. M&E APPROACHES: RBM, LOGICAL FRAMEWORK AND THEORY OF CHANGE (TOC)

  • Results-based
  • Logical Framework
  • Theory-Based
  • Applying program theory and logic models for systemic planning and evaluation
  • The use of logic models in planning, implementation, monitoring and evaluation
  • Detailed look at RBM, the Logframe and TOC

5. HOW TO BUILD CAPACITY FOR M&E

  • Form a monitoring structure in relation to work to be done and resources available
  • Link structure with functions/roles
  • Assemble a multi-skilled team
  • Understand the skills range, strengths and weaknesses of the team
  • Train the team on M&E, time management and communication

6. DESIGNING PROGRAM MONITORING SYSTEMS

  • Choosing key aspects to be monitored
  • Identifying data sources
  • Designing sound data collection and collation tools
  • Developing standard operating procedures (SOPs) for managing data

7. DEVELOPING MEASURES AND INDICATORS

  • Overview of results-based M&E
  • Importance and use of indicators in M&E
  • Conceptualisation of indicators and other measurement instruments
  • Classification of indicator types
  • Indicators at various stages of the logic model
  • Requirements and characteristics of “good” indicators (a simple tracking sketch follows this list)
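
As an illustration of tracking an indicator against a baseline and target over the life of a project, the sketch below uses an invented availability indicator and invented figures; it is not a prescribed tool from the course.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """A performance indicator with baseline, target and latest actual value."""
    name: str
    unit: str
    baseline: float
    target: float
    actual: float

    def progress(self) -> float:
        """Share of the planned improvement achieved so far (0.0 to 1.0+)."""
        planned_change = self.target - self.baseline
        achieved_change = self.actual - self.baseline
        return achieved_change / planned_change if planned_change else 0.0

# Hypothetical outcome-level indicator for a generation project
availability = Indicator(
    name="Plant availability factor", unit="%", baseline=72.0, target=90.0, actual=81.0
)
print(f"{availability.name}: {availability.progress():.0%} of target improvement achieved")
```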

8. DATA COLLECTION & ANALYSIS FOR M&E

  • Research methodology for M&E processes and practices
  • Research design – problem statements and evaluation questions
  • How to conduct surveys and qualitative studies for M&E
  • Understand the difference between process and outcome evaluation practices
  • Designing and selecting monitoring tools
    o Designing questionnaires using ODK for data collection
  • Data Collection and Management
  • Data analysis and generic Excel reports
  • Statistical analysis (probability models)
  • Making projections using generation data (see the sketch after this list)
  • Advanced Excel
  • How to develop reports
  • Reporting findings
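
To make the projections item concrete, the sketch below fits a simple straight-line trend to invented monthly generation figures and projects the next three months. A linear trend is only one of many possible models, and the numbers are purely illustrative.

```python
import numpy as np

# Hypothetical monthly net generation (GWh) for the past 12 months
generation = np.array([310, 305, 298, 312, 320, 315, 309, 318, 325, 322, 330, 328], dtype=float)
months = np.arange(len(generation))

# Fit a straight-line trend (degree-1 polynomial) to the historical data
slope, intercept = np.polyfit(months, generation, deg=1)

# Project the next three months from the fitted trend
future_months = np.arange(len(generation), len(generation) + 3)
projection = slope * future_months + intercept

for m, value in zip(future_months, projection):
    print(f"Month {m + 1}: projected generation {value:.1f} GWh")
```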

9. MANAGING DATA QUALITY

  • Data quality principles and rationale
  • Assessing risks to data quality
  • Tools for managing data quality
  • Ethical considerations in dealing with data
  • What routine data quality assessments (RDQAs) are
  • Tools and processes for conducting RDQAs (a simple automated check is sketched after this list)
  • Communicating and using results of RDQAs
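
As one possible illustration of a routine, automated data quality check, the sketch below screens hypothetical monitoring records for missing and implausible values. The field names and thresholds are assumptions for illustration, not prescribed RDQA rules.

```python
# Hypothetical monitoring records from site logbooks
records = [
    {"site": "Plant A", "month": "2024-01", "generation_gwh": 310.0},
    {"site": "Plant B", "month": "2024-01", "generation_gwh": None},    # missing value
    {"site": "Plant C", "month": "2024-01", "generation_gwh": -15.0},   # implausible value
]

REQUIRED_FIELDS = ("site", "month", "generation_gwh")

def quality_issues(record: dict) -> list[str]:
    """Return a list of data quality issues found in a single record."""
    issues = []
    for name in REQUIRED_FIELDS:
        if record.get(name) is None:
            issues.append(f"missing {name}")
    value = record.get("generation_gwh")
    if value is not None and value < 0:
        issues.append("generation_gwh outside plausible range")
    return issues

for record in records:
    found = quality_issues(record)
    if found:
        print(f"{record['site']} {record['month']}: {', '.join(found)}")
```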

10. PLANNING AND MANAGING EVALUATIONS

  • Assess readiness for evaluation
  • Budgeting for evaluation
  • Identifying and selecting the evaluation team
  • Developing a comprehensive terms of reference (TOR)
  • Key considerations in managing consultants

11. DESIGNING EVALUATIONS FOR FIELD-BASED PROGRAMS

  • Evaluation principles and approaches for field-based programs
  • Identifying evaluation questions and developing a learning agenda
  • Selecting an appropriate evaluation design
  • Collecting evaluation data

 

12. ANALYSING AND REPORTING DATA FOR DECISION MAKING

  • Developing a data analysis strategy
  • Selecting data analysis methods
  • Presenting analysed data (a summary-table sketch follows this list)
  • Common pitfalls/errors in analysing data
  • Identifying program stakeholders and their information needs
  • Selecting appropriate communication tools for different audiences
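
As a small illustration of turning monitoring records into a report-ready summary, the sketch below aggregates invented plant-level data into a per-site table. The column names and figures are hypothetical.

```python
import pandas as pd

# Hypothetical monitoring records collected across sites and quarters
records = pd.DataFrame({
    "site": ["Plant A", "Plant A", "Plant B", "Plant B"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "generation_gwh": [310.0, 325.0, 280.0, 295.0],
    "unplanned_outages": [3, 2, 5, 4],
})

# Aggregate into a per-site summary suitable for a management report
summary = records.groupby("site").agg(
    total_generation_gwh=("generation_gwh", "sum"),
    avg_unplanned_outages=("unplanned_outages", "mean"),
)
print(summary.to_string())
```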

The training approach is highly interactive, mixing presentations by the facilitator and by participants, group and individual exercises, case studies and role plays. These proven learning techniques enhance understanding and retention of the issues covered.