Monitoring, Benchmarking, Impact
Ms Ang Bee Lian, 7 February 2017
Dear Social Service Practitioners,
Very often, we hear responses such as “we will monitor”, “why not monitor it for a while?” and “what have we done with the monitoring?” But what does monitoring mean and entail in practice?
Monitoring is often about collecting information that will help to answer questions about a programme, a service or the use of resources. The output of monitoring should enable decision making, and it is best to identify those decisions upfront, since they determine what we monitor and how we monitor it.
If this is the case, the information can be collected in a planned, organised and routine way. The information can then help the provider or owner to report on progress and help in evaluation.
All programmes and services have records and notes, and there should be a structure for discussing what staff are doing and for facilitating supervision. This simple checking becomes monitoring when information is collected routinely and systematically against a plan. The information might be about activities or services, about users, or about outside factors affecting the programme or project.
A Good Monitoring System
Information for monitoring is collected at specific times: daily, monthly or quarterly.
Here are some basic points for a good monitoring system:
- collect data at the most natural point of everyday activities, and get commitment from those collecting the information by explaining why they are doing it and by building a simple, user-friendly system
- make sure that everyone responsible for monitoring has clear and consistent guidelines
- make sure that monitoring records are completed fully and accurately, as people may not otherwise regard this as a high-priority activity
- provide feedback on the results of monitoring to those collecting information, and explain how it is being used to make the programme or service more effective
- check that the programme or project is not collecting the same piece of information more than once.
Because much effort goes into data collection, it is important to ensure that the data being collected is useful and is monitored against a plan. It is also essential for staff to know what data already exists, and how the data they collect, once analysed, links back to decision making.
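One way to keep collection consistent is to write the monitoring plan down in a simple structure, so that everyone knows what is collected, how often, by whom and for which decision. The sketch below is purely illustrative; the programme, field names and values are hypothetical and not drawn from any actual service.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class MonitoringItem:
    """One piece of information collected routinely against the monitoring plan."""
    data_item: str           # what is collected
    frequency: str           # how often it is collected, e.g. daily, monthly, quarterly
    collected_by: str        # who is responsible for recording it
    decision_supported: str  # the decision the information will feed

# A hypothetical plan for an illustrative befriending programme (values are made up)
plan = [
    MonitoringItem("home visits made", "monthly", "befrienders", "adjust volunteer deployment"),
    MonitoringItem("clients reporting reduced isolation", "quarterly", "case workers", "review programme design"),
    MonitoringItem("home visits made", "quarterly", "admin staff", "report progress to funder"),
]

# Simple check that the same piece of information is not collected more than once
counts = Counter(item.data_item for item in plan)
duplicates = [name for name, n in counts.items() if n > 1]
if duplicates:
    print("Collected more than once:", duplicates)
```

Even a short list like this makes it easier to give those collecting the information clear guidelines and to spot duplicate collection early.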
Benchmarking
Benchmarking is commonly associated with monitoring, and people often ask what and whom to benchmark against. As with most exercises, there can be different approaches to benchmarking. In its simplest form, it can consist of two people meeting at an event and discussing the way their programmes or services are marketed, or how they recruit staff, and then using this knowledge to improve their processes. However, some will not consider this benchmarking, as it is more a case of compare and contrast. In the social sector, much of what passes for benchmarking takes the form of sharing and learning. This approach can be a quick and easy way to learn good practices and to share solutions to common problems.
As a more structured process, benchmarking would usually take the following four stages:
Stage 1: What areas to compare
Agree on the areas which would benefit most from a comparison with others, and agree on which agencies to benchmark with
Stage 2: What information to gather
Gather appropriate information about current performance or practices
Stage 3: What similarities and differences
Share this information with each other and reflect on any similarities and differences which are highlighted
Stage 4: What to improve in the agency
Decide what changes are needed in the agency
So when is benchmarking appropriate?
(a) When the agency spots a weakness or problem with the way an area in the agency is functioning (perhaps something which has been highlighted through a SWOT (strengths, weaknesses, opportunities, threats) analysis or review of services). Benchmarking can be useful if the agency is not sure how to go about improving this area and is looking for some fresh ideas.
(b) When the agency is considering trying out something new, and wants to find out if others have tried something similar; the agency can then learn about the potential pitfalls and how to avoid them.
(c) When the agency is aware of a process or procedure which takes up a lot of staff time or resources, and wonders if other agencies have found a better way of doing things.
It is useful, therefore, to identify an area where improvement is needed, and to consider whether to benchmark a process, a service delivery system or a cost. The aim of benchmarking may be to help in learning and adapting, to increase productivity by saving on resources, or to give clients a better experience.
What do we mean by Impact?
Another concept related to monitoring and benchmarking is impact, and we hear a variety of meanings attached to it. Most will say that impact is about outcomes, and outcomes refer to the more direct benefits or effects brought about by an intervention or the introduction of a service or programme. However, we need to realise that outcomes can sometimes happen with or without the intervention. Impact is the change in outcome that the intervention causes over and above what people would have accomplished on their own. It covers the wider and longer-term consequences of actions on the social, economic and physical environment, and is usually about development in the long term.
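A small worked example may make the distinction concrete. The figures below are hypothetical, and the calculation assumes a counterfactual (what a comparable group would have achieved without the programme) can be estimated.

```python
# Hypothetical figures for illustration only; they are not from any actual programme.
outcome_with_programme = 0.60    # e.g. share of participants who found work within six months
estimated_counterfactual = 0.45  # share of a comparable group who would have found work anyway

# The outcome is the 60% observed; the impact is only the change the intervention
# causes over and above what people would have accomplished on their own.
impact = outcome_with_programme - estimated_counterfactual
print(f"Observed outcome: {outcome_with_programme:.0%}")
print(f"Estimated impact: {impact:.0%} (percentage points above the counterfactual)")
```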
For many, outcomes are changes in behaviours, skills, knowledge, attitudes, conditions or statuses. They are related to the core business of the programme, and are realistic and attainable within the programme’s sphere of influence. Most would ask that outcomes be developed to be Specific, Measurable, Achievable, Relevant and Time-bound (SMART). What is important, therefore, is for a programme or service to agree with its funders and commissioner on the level of change, or prevention of deterioration, that is expected, and then to determine the information that is useful and feasible to collect. It also pays to be realistic about what can be achieved and about how the level of attainment by the intervention can actually be measured.
The practical approach is to identify a realistic level of change that could reasonably be associated with the activities of the programme, for example the development of effective plans or the delivery of the service. Outcomes such as ‘improved emotional well-being’ or ‘improved quality of life’ are hard to measure and require good longitudinal studies.
It is more realistic to collect data about the intermediate outcomes achieved, such as a person’s readiness for work, improvements in skills and confidence, or increases in exercise.
How does one then improve on the analysis? If you are dealing with enough numbers, segment the data according to user profile, frequency or length of use of the service, access to other services or training, and other important factors, as this may reveal significant differences. The information may also throw light on any bias in the sample; for example, older people may not have responded, and this then needs to be examined further. Such cross-analysis can help in understanding what the key factors in achieving improvements are.
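As a minimal sketch of such a cross-analysis in Python with pandas, assuming the records sit in a table whose column names (age_group, frequency_of_use, outcome_improved) and values are hypothetical:

```python
import pandas as pd

# Hypothetical service-user records; the columns and values are assumptions for illustration.
df = pd.DataFrame({
    "age_group":        ["under 35", "under 35", "35-60", "35-60", "over 60", "over 60"],
    "frequency_of_use": ["weekly", "monthly", "weekly", "monthly", "weekly", "monthly"],
    "outcome_improved": [1, 0, 1, 1, 1, 0],   # 1 = intermediate outcome achieved
})

# Segment the outcome rate by user profile and by frequency of use of the service
print(df.groupby("age_group")["outcome_improved"].mean())
print(df.groupby("frequency_of_use")["outcome_improved"].mean())

# Check the sample for possible bias, e.g. whether older people are under-represented
print(df["age_group"].value_counts(normalize=True))
```

Differences between segments then point to where the programme appears to work best, and where the data itself may be skewed.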
It is therefore useful to consider what aspect of the service or intervention was most or least useful or important, what other factors were important in achieving change and what factors acted as barriers to achieving change.
At the heart of this is recognising that it may not be possible to establish cause and effect, or to attribute the change entirely to the programme. What you may be doing is establishing how you contribute to a web of interventions which together enable more long-term and sustainable change.
When referring to benchmarking, outcomes and impact, it is more constructive to first spell out the meanings and parameters for a purposeful discussion, as the various parties may bring different meanings with them. Good programmes and good exchanges of views often take place when ideas and concepts are kept simple, when contributory and causal factors are made clear, and when claims of achievement or credit are kept circumspect, especially when we are delivering human services.
Ms Ang Bee Lian
Director of Social Welfare (1 Nov 2013 - 30 Jun 2020)