Recently, CODE has begun building a new evaluative learning process for use both in our organizational culture and in our international programming. As part of our ‘Learning Based Management Approach’, we are ensuring an organizational focus on the ‘learning’ that emerges from our education and literacy programming. Through this new evaluative lens, we expect a significant increase in our understanding of how we achieve the results we are known for but have often struggled to document well.
The idea that learning leads to improved development outcomes is obviously not a new thought (would anyone argue otherwise?). The challenge, however, has been operationalizing this approach in the face of a global process that often privileges quick evidence of developmental ‘results’. What often follows are organizations that operate in constant fear of losing their funding if these ‘results’ are not produced quickly. Results are important, no question, but we can all point to examples where a preoccupation with results over informed process and the capacity strengthening of partners has led to projects that, at best, accomplish nothing. Is it any wonder, then, that when faced with this pressure our NGO partners often have no option but to try to ‘measure up’ in whatever way they can so as to receive continued life support?
From a more charitable view, many program monitoring and evaluation systems are built with the best of intentions but are simply not sustainable, no matter how generous the funding dedicated to supporting them. Many of us have seen project staff, both at headquarters and in the field, collecting data through a monitoring process because that is what they are supposed to do, not because they see it as integral or essential to the informed operation of a program.
Our aim at CODE is to build and support evaluative processes that privilege learning, through a process that includes the implementing staff and, hopefully, the communities with whom we work. Both are critical to meaningful participation in data collection and to fulfilling their part in an informed management process.
CODE’s evaluative learning approach is designed to feed the ‘learning’ aspect of our management process, not to sustain the drive to produce results or to meet reporting requirements; these are necessary but, in essence, secondary. To operationalize this evaluative learning process, CODE has focused on three stages.
First, there is Mapping. This stage provides CODE with the tools to map our theories of change with increased clarity. The second stage is Monitoring. Here, CODE uses multiple evaluative activities focused on behavioural changes, making it easier to collect, document, analyze and integrate information into daily operations so that CODE and our partners know we are on track to achieve our intended outcomes. Finally, there is the stage of Management. In this stage, CODE’s program managers are provided with comprehensive inquiry-based management tools that support learning, risk management and improved outcome reporting. Each of these stages allows project staff and the communities with which they work to participate in identifying outcomes and in the data collection processes needed to feed their learning process. It is a constant cycle of planning, tracking, reflecting and reintegrating knowledge, through planning once again, into our programs.
On the technical side, while we have to this point used the usual Word and Excel templates to manually ‘collect’ monitoring data, we are now introducing simple mobile data collection tools for community members such as teachers, students and their families, as well as for project staff in the field. These digital ‘tools’ do not merely function as documenting devices; they help drive the learning process, opening new areas for involvement and conversation with local staff and communities who are often left out of a still very text-based discussion about ‘their development’.
As Richard Branson of Virgin famously put it: “I’d rather spend time with people in the field where eye contact, genuine conviction, and trustworthiness are in full evidence”. It is about the conversation, not just the collection, and the same principle should apply to evaluation processes for the sake of learning.
At CODE we are early in this process of ‘baking’ evaluative learning into our organization. We are, however, dedicated to this growth in our ability to learn from what we do and to ensuring the ‘learning’ that emerges is reinvested back into program planning. With this improved understanding of what has happened, why it happened and how it took place, it will prove easier to report on outcomes (results) that demonstrate the connections between the downstream outcomes of social change and the output-related strategies that usually characterize the limits of regular monitoring reporting.
It is always a steep climb at the beginning, breaking out of old habits that are comfortable, but it’s worth the work in terms of meaningful outcomes achieved and the sustainability of programs through the reintegration of this knowledge back into our work. At CODE we understand that if we ensure we are learning, positive results will follow. We have the map; our journey is just beginning.