Ongoing monitoring and evaluation: Determining indicators of success

Project assessments must be conducted regularly to give the team a firm grasp of targets and timeframes. Assessment must not be confined to the final stage, when it is too late to revise the targeted objectives and they risk going unmet. Flexibility is essential for staying ahead of the difficulties that policy implementation may encounter, and responsiveness to changing environments and emerging risks is crucial to success. In this sense, ongoing monitoring is invaluable both for tackling unexpected outcomes and for identifying partial achievements.

Although the following module discusses indicators of success, it is also important to recognize measures that indicate progress as the plan develops. Provided that outcome-specific objectives were defined at the planning stage, monitoring will be well targeted.

Focusing on the internal understanding of the program makes it possible to assess ongoing activities and the results they produce (that is, whether activities are implemented according to plan and outputs are achieved on time). In this sense, monitoring and evaluation (M&E) systems allow program outputs to be observed, measured, and documented, and additional data can be gathered through sample surveys, focus group discussions, interviews, and so forth. In any case, a critical function of the methodology is to identify indicators and to design and select appropriate data-collection tools so as to establish the results chain (the intended sequence of steps that leads to the expected outcomes). Indicators should therefore target each step of this chain and include both quantitative and qualitative input, for example feedback and opinions expressed by policy makers and participants during the implementation of engagement strategies (OECD 2013).
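To make the idea of indicators targeting each step of the results chain concrete, the following is a minimal sketch in Python. All names, steps, targets, and values are hypothetical illustrations, not part of any specific M&E framework: each step of the chain holds one or more indicators (quantitative or qualitative), and a simple status check reports whether every indicator in a step currently meets its target.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a results chain monitored by indicators.
# Step names, indicator names, and targets are illustrative only.

@dataclass
class Indicator:
    name: str
    kind: str              # "quantitative" or "qualitative"
    target: float          # target value (e.g. count, or a 0-1 rating)
    actual: float = 0.0    # latest measured value

    def on_track(self) -> bool:
        # An indicator is on track when the measured value meets the target.
        return self.actual >= self.target

@dataclass
class Step:
    name: str
    indicators: list = field(default_factory=list)

def chain_status(chain):
    """Report, per step of the results chain, whether all indicators meet their targets."""
    return {step.name: all(i.on_track() for i in step.indicators)
            for step in chain}

# Illustrative chain: activities -> outputs -> outcomes.
chain = [
    Step("Activities", [Indicator("workshops held", "quantitative", 10, 12)]),
    Step("Outputs",    [Indicator("reports published", "quantitative", 4, 3)]),
    Step("Outcomes",   [Indicator("stakeholder satisfaction", "qualitative", 0.7, 0.8)]),
]

print(chain_status(chain))
# → {'Activities': True, 'Outputs': False, 'Outcomes': True}
```

In practice, the qualitative input mentioned above (feedback and opinions from policy makers and participants) would need to be scored or coded before it can feed a check like this; the sketch simply shows how indicators can be attached to each link of the chain so that partial achievements become visible.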

Communication can take several forms, depending on the flow of information and the ability to respond.

