Keeping an eye, as I do, on public sector tenders being put out to competition among consultants such as me, it seems there is a major push on evaluation at the moment. This should come as no surprise: for the public and not-for-profit sectors, evaluation should be a key tool for a number of reasons:
- ensuring the project is doing what was intended
- monitoring what inputs are going into the project and what outputs are being generated by those inputs
- determining the impact of those outputs
- demonstrating the sound deployment of scarce public resource for the good of the target community
In the major public funding rounds - and I can speak from experience of ESF, ERDF, SRB and NRF - there is, from the very moment of winning these funds, a clear expectation that the application of the funds will be monitored and evaluated. (Apologies for the acronyms: the first two, ESF and ERDF, are the European Social Fund and the European Regional Development Fund; SRB, the Single Regeneration Budget, is mostly complete now; and NRF is the Neighbourhood Renewal Fund.) It is also true that the best funding recipients build the evaluation process into their projects - allocating funding up front, being clear on what inputs, outputs and impacts will look like, and setting up internal or external processes and resources to ensure the evaluation can happen.
For a consultant like me, it is great to work with such organisations: they are clear about what is required, and with the evaluation included in the early stages of the project, there are no nasty surprises at the end when it comes to justifying spend. It is also more cost effective for the public purse, in my view, with evaluator and project group working together throughout the process to ensure that data is collected in a timely fashion, that something can be done if the project is going off track, and that there is no panic at the end to prove the benefits.
The evaluations that, if I am honest, make consultants a chunk of money are those tagged on at the end of a project in a panic, once the funds have been dished out and key players have moved on to the next big thing. The ones where a project manager, reading through their notes on the programme, gets that awful sinking feeling that there was a final stage to go through - sound familiar?
Such add-on evaluations, in my view, are never as robust or insightful as evaluations built into the programme, yet they are still expected to show the success or otherwise of the funding. Finding the required data and outputs can be challenging and time consuming; on long programmes, the initial ambitions and aims may be difficult to unearth as players and practices move on; and there is always the excuse that the essential information can no longer be found due to filing errors.
Best practice is to build in evaluation from the start - save time, money, resources and worry by doing so - or be prepared for a painful process at the end of the project!