How can monitoring become thoughtful?
“Cross the river by feeling for stones” - Attributed to Deng Xiaoping
Source: https://pixabay.com/illustrations/woman-root-fantastic-surreal-magic-2391033/
Introduction
The purpose of monitoring is simple: to improve the implementation performance and prospects for lasting impact of aid programmes. With this in mind, it is puzzling that, compared to the resources afforded to evaluation and evaluators, monitoring has long been neglected by the development and humanitarian aid community (Ticehurst, 2013; Hall & Rogers, 2021). So much so, in fact, that one gets the impression that evaluating aid programmes is a more challenging and deserving task than delivering them. Yet perhaps it is not surprising: some view monitoring as little more than a reporting function, one that describes, rather than explains, performance.
Thoughtful monitoring aims to simplify the art of learning and adaptation by using a language understood by normal people – an ambition that challenges many in the evaluation community. The current fashion, to take one example, of adding more letters to the acronym M&E – L for Learning, A for Accountability, R for Results – appears to be more for affect than effect. It reflects a rather unlettered understanding of M&E and its history, both among those who put the letters there and among those who find them revealing or helpful.
This article proposes how thoughtful approaches to monitoring can help resolve the decision uncertainties managers face during implementation, and so improve performance. Managers often face such uncertainties when navigating complex and unpredictable environments in pursuit of results. Often, in complex systems, the most important things are unknown and unmeasurable. Take, for example, assumptions: they defy prediction and, from a management perspective, matter as much as, if not more than, indicators.
“It is wrong to suppose that if you can’t measure it, you can’t manage it – a costly myth.” — W. Edwards Deming, 1982.
The primary intended audience for this article is managers and those who work in organisations or programmes as ‘M&E’ specialists (although I question whether such people exist, given the different skills and experience the two functions require!). The article aims to provoke conversations among them about how to develop monitoring approaches that improve the relevance and performance of development and humanitarian aid; that is, approaches that are thoughtful.
More thoughtful approaches to monitoring:
1) Offer a cure for the obsession with indicators and plans for measuring results – there is more to monitoring than this;
2) Evolve over time as information needs change, as do the needs of client beneficiaries;
3) Are driven by the needs of managers and help them treat the intended client beneficiaries, their goals and their culture as king – it is they who legitimise development and humanitarian aid, and it is their indigenous knowledge systems that we need to learn from;
4) Generate lessons that provide evidential reasons for adapting, through becoming accountable to client beneficiaries; and
5) Reflect, and are attuned to, the complex operating environment – one approach or system does not fit all situations (Snowden, 2020).
This article suggests that many of the issues to do with ‘M&E’ that programmes face are neither methodological nor technical, but organisational and managerial: they centre on the object of, positioning of, and responsibilities for monitoring. Monitoring is primarily a learning function that needs to be integrated into the structures, functions and processes of a programme or organisation – notably into management and its associated planning, financing, operations, learning and decision-making. In many cases monitoring, through the establishment of the ubiquitous ‘M&E’ team, sometimes through a third party, is contrived in its design: the team and its people run in parallel to these structures, functions and processes. Resolving this trumps the much-cited need to establish synergies between monitoring and evaluation.
Background
This article was motivated by learning how navigating astounding complexity, and allowing solutions to emerge, can be a simple process of trial and error (Harford, 2011). Complex problems seldom require sophisticated or expertly designed solutions (Rondinelli, 1983).
For the purpose of this article, I understand monitoring (M) to be a very different function from evaluation (E), in four ways:
1) Main purpose (M = continuous assessment and learning for the programme team; E = periodic assessment, accountability and learning for the donor);
2) Primary information users (M = managers; E = donors);
3) Responsibilities (M = programme team; E = donors); and
4) Requirements for comparative analysis.[1]
The fourth difference is nuanced: monitoring is more focussed on assessing how different beneficiary clients rate and respond to the aid, and how and why this varies over time and between outputs, people and places. Evaluation is relatively more interested in comparing the differences between those who have and have not received support, and between programmes.
The four core questions thoughtful approaches to monitoring seek to answer can be summarised as follows (a short illustrative sketch of how they might be captured follows the list):
1) On Effort – what has the project team done, at what cost, over what time period and how well was this done?
2) On Outreach - what services and products has the programme delivered across the outputs, in what quantities, to whom, where and at what cost?
3) On How well – with what degree of quality and relevance were these services and products judged by the project's client beneficiary group(s)?
4) On Change – who, among the beneficiary client group(s), is beginning to do what differently, by themselves and/or with others?
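To make these questions concrete, the sketch below shows what a single monitoring record organised around them might look like. It is a minimal, hypothetical illustration only: the choice of Python, the MonitoringRecord name and every field are my assumptions, not a prescribed schema, and a real programme team would substitute whatever its managers actually need to know.

```python
# A hypothetical sketch only: field names and types are illustrative
# assumptions, not a schema prescribed by the article.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class MonitoringRecord:
    # 1) Effort: what the team did, over what period and at what cost
    activity: str
    period_start: date
    period_end: date
    cost: float
    # 2) Outreach: what was delivered, in what quantity, to whom and where
    output: str
    quantity: int
    location: str
    client_group: str
    # 3) How well: quality and relevance as judged by client beneficiaries
    client_quality_rating: int      # e.g. 1 (poor) to 5 (excellent)
    client_relevance_rating: int
    client_comments: List[str] = field(default_factory=list)
    # 4) Change: who is beginning to do what differently
    observed_changes: List[str] = field(default_factory=list)

# Example: a record a frontline team might log after a field visit
record = MonitoringRecord(
    activity="Farmer field school session",
    period_start=date(2024, 3, 1),
    period_end=date(2024, 3, 7),
    cost=450.0,
    output="Training on drought-resistant seed varieties",
    quantity=32,
    location="District A",
    client_group="Smallholder farmers",
    client_quality_rating=4,
    client_relevance_rating=5,
    client_comments=["Sessions timed around market days would help"],
    observed_changes=["Three farmers trialling the varieties on their own plots"],
)
print(record.client_group, record.client_quality_rating)
```

Collected over time, records like this would let a team ask how ratings and responses vary between outputs, people and places – the comparative questions that, as noted above, monitoring (as distinct from evaluation) is most concerned with.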
What is ‘Thoughtful Monitoring’?
Thoughtful monitoring is pursued by developing communication processes within and outside programmes and organisations in order to understand clients’ needs and responses; this helps make programmes accountable to their clients. Thoughtful monitoring also provides space for those on the frontline to listen to and learn from each other by sharing their experiences of providing support, often in difficult environments and to hard-to-reach clients. Organisational silence is the single most significant factor impeding an organisation’s ability to adapt and change (Morrison & Milliken, 2000). Thoughtful monitoring is ultimately about developing ways of being accountable to low-income and vulnerable people, neglected communities and struggling institutions through listening, gauging how responses vary, feeding back interpretations and taking appropriate action, at the beginning (in the form of a baseline) and throughout implementation.
The Basics of ‘Thoughtful Monitoring’
It treats the client, not the donor or shareholder, as king. As such, it encourages managers to embrace uncertainty by helping them navigate towards success or failure, learn from this and so evolve. Important as they no doubt are, thoughtful monitoring goes beyond affording primacy to the voices of beneficiary clients, being aware of complexity and being reflective by asking evaluative questions. It has other features, notably being mindful of:
1) Using mental tools: asking how monitoring can benefit those responsible for implementing programmes, how it works, what their information requirements are, and what decision uncertainties they face and when – as opposed to beginning the process by developing a theory of change (ToC) and/or a results framework;
2) The need to integrate monitoring into existing systems, processes and responsibilities, and not to confuse a monitoring (and evaluation) system with a data-collection plan developed with sole reference to a results framework or ToC;
3) Ensuring the nature, function and processes of monitoring reflect those of the complex and uncertain environment the programme operates in, in particular the need for rapid feedback loops and the consequences of this for decision-making and operations;
4) How the most important information in monitoring interventions in complex systems is unknown. Afford attention and effort to adequately researching, and then periodically revisiting, the assumptions, rather than ignoring them and waiting to measure the – assumed – result above them in a results framework until it is too late;
5) Recognising how monitoring can learn from and help bring material value to indigenous knowledge systems, systems that, unlike mainstream approaches to monitoring, embrace complexity;
6) Balancing the need to be as accountable to those the programme supports as to those who fund it, through adapting the results and treating those in need as subjects of conversations that matter to them, not as objects of interviews that matter to the M&E team or the donor; and
7) Giving voice to those who deliver the support so that they can share their experiences – successes and challenges – and ensuring senior management listens to these so as to improve the nature of the support and the ways in which it is delivered.
Avoiding the pitfalls of a separate system
Being “part of the team” (Patton, 2010), in aiding learning and action, has been an objective that has eluded donors and management for decades (Casley & Kumar, 1987; Guijt, 2008; Pritchett et al., 2013). This is evidenced by the persisting, discrete presence of the ubiquitous Monitoring and Evaluation (M&E) team, which reinforces the silo between monitoring and management. The most important feature of those who perform in such structures is their continued existence. This isolation has been exacerbated by ‘third-party’ monitoring, a contradiction in terms, and by the recent rise in popularity of ‘MEL partnerships’, an abrogation of responsibility by management. A spit roast for M&E companies, however.
The most important question managers and their teams need to ask at the outset is whether there is a need to have a monitoring (or evaluation) professional on the team. A central tenet of thoughtful monitoring is that the starting position should be ‘no’. The argument for having one rests on:
a) whether the existing management, and those responsible for delivering the support, have the capacity to fulfil the monitoring function; and
b) how well existing decision-making, lesson learning and operational processes enable assessing, listening and learning.
If it is deemed necessary to have a dedicated person or team perform the role, it is important to mitigate the risks of isolation, over-dependence and the potential irrelevance of the evidence from a management perspective.
Conclusions
Thoughtful approaches to monitoring have many applications for organisations and programmes that work across the different themes of development and humanitarian aid – education, health, governance, private sector development, agriculture and so on.
Taking up thoughtful approaches to monitoring is at the discretion of the user. Its distinguishing features can be used à la carte, as it were; it is not meant to be treated like a recipe whereby all the ingredients have to be used at once. There will, no doubt, be many instances where organisations and programmes only have the need and capacity to take on what they see as the priority areas, and to add more, or not, as they move on from there.
In doing so, I would encourage managers and M&E specialists (or teams, as they are currently set up), with the seven features of thoughtful approaches above in mind, to reflect on the adequacy of their approach.
References
Casley, D.J. and Kumar, K. (1987) Project Monitoring and Evaluation in Agriculture. Baltimore and London: The Johns Hopkins University Press.
Deming, W.E. (1982) Out of the Crisis. Cambridge, MA: Massachusetts Institute of Technology Press.
Guijt, I. (2008) Seeking Surprise: Rethinking Monitoring for Collective Learning in Rural Resource Management. Published PhD thesis, Wageningen University, Wageningen, The Netherlands.
Hall, J. and Rogers, P. (2021) What do we need for better monitoring? Blog post, Better Evaluation.
Harford, T. (2011) Adapt: Why Success Always Starts with Failure. Little, Brown Book Group.
Morrison, E. and Milliken, F. (2000) Organizational Silence: A Barrier to Change and Development in a Pluralistic World. Academy of Management Review, Vol. 25, No. 4, pp. 706–725.
Patton, M.Q. (2010) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York: Guilford Press, pp. 274–279.
Pritchett, L., Samji, S. and Hammer, J. (2013) It‘s All About MeE: Using Structured Experiential Learning (“e”) to Crawl the Design Space. Center for Global Development, Working Paper 332.
Rondinelli, D.A. (1983) Development Projects as Policy Experiments: An Adaptive Approach to Development Administration. London and New York: Methuen, pp. 1–22 and 66–88.
Snowden, D. (2020) Building Scalable Organizations that can Deal with Uncertainty – with Dave Snowden. Boundaryless Conversations Podcast, Season 2, Episode 5.
Ticehurst, D. (2013) Who is listening to whom, how well and with what effect? Paper presented at the International Development Evaluation Association conference, Barbados.
[1] Adapted from Casley and Kumar (1987)