

Using the Participatory Monitoring Approach, Most Significant Change, for an Anti-Corruption Program

By Sandra Sjögren and Cheyanne Scharbatke-Church

In this post, Sandra Sjögren and Cheyanne Scharbatke-Church discuss why they chose the Most Significant Change approach to monitor an anti-corruption program in the DRC, how the monitoring process unfolded, and the ways in which it did (or did not) fit their needs. Among other things, their experience showcases some of the advantages and drawbacks of participatory monitoring.


Programming on sensitive issues like corruption, gender-based violence, or sex work is challenging to monitor effectively due to social desirability bias, the illegal nature of some of the acts, and the difficulty of discerning perceptions from reality, to name just a few of the hurdles that have to be cleared. For the program we were working on (Kuleta Haki, a network of criminal justice officials and actors in Lubumbashi, DRC, working to diminish corruption), there were several additional factors that had to be taken into account for our monitoring to inform ongoing program decisions.


Our monitoring choices had to contend with a highly volatile political climate surrounding the presidential election in the DRC, exacerbated by the fact that the prime contender came from Katanga, the region where our program was running. Matters were further complicated by our work in the justice sector, a key pillar for maintaining political power, both illicitly and legally. Finally, the nature of our program, a group intended to have the solidarity and strength to stand up to the established system of corruption, made the safety of participants of the highest importance. To add to this contextual challenge, our team on the ground was new to the principles central to this engagement: theory of change programming, adaptive management, and participatory monitoring.


What is Most Significant Change (MSC)?

MSC, developed by Rick Davies, is a form of participatory monitoring. It captures the changes that program participants value, rather than the changes that we (the implementers) deem important, as embodied in pre-set indicators. A free MSC manual is available.


Why MSC was selected

Several reasons pushed MSC to the top of our options list, including:


Accurate indicators were highly unlikely

Developing a program with crisp and detailed change pathways that would unfold as expected was not realistic in such a volatile environment, particularly with an innovative program design. Without this, generating accurate indicators that truly signaled the change we sought over time was highly unlikely.


We wanted information on assumptions

The program design contained a number of assumptions, not all of which were agreed upon or explicitly articulated. Therefore, we were looking for a method that could clarify, and correct, assumptions on how and why change was happening and help us think through what to do next.


Participant ownership of the program was a key operating principle

Ownership of the Kuleta Haki Network by its members was critical to program effectiveness and sustainability, and we wanted our monitoring to support this principle. We therefore needed a participatory process that involved the Network in the conversations on effectiveness.


Dissatisfaction with more traditional monitoring approaches

There are numerous national and international programs that seek to support the justice system in the DRC. Most are based on a “classical” approach that consists of reinforcing justice professionals’ capacities. In parallel to this “classical” approach, the M&E system typically tracks indicators set at the outset on the basis of “hard” assumptions; the logical framework, for example, is a popular M&E tool in justice support programs. As explained above, the highly volatile context and the difficulty of defining clear assumptions encouraged us to build our program design around an alternative M&E approach. MSC gives the program flexibility: the more we learn from MSC, the more the program adapts, and the better it can begin answering fundamental questions that previous justice programs have not explored, namely why and what creates change in behavior.


‘Fit’ between program participants and approach

Justice professionals are accustomed to writing and developing arguments, so we assumed they would be comfortable documenting and explaining their stories. The written format also offered the added benefit of greater confidentiality on a sensitive topic.


What we actually did

We will reflect on what worked and what did not in our next post, but first we thought it would be useful to lay out the process we used.


1. Internal discussion on the methodology

The first step was to make sure all team members had clarity on the steps in the MSC process and the rationale behind them. Next, we reviewed the standard MSC process in light of our context to identify any modifications or concerns, which brought two issues to the fore. First, who should identify the most significant stories? We liked the trust-building potential of asking the whole group to choose together. However, out of respect for participants’ time, we ended up asking a select set of members to choose the most significant stories instead.


The second, and far more challenging, issue was validating the stories, a standard part of the regular MSC process. After much discussion, we concluded that it was impossible to validate the veracity of the stories through additional data collection. For instance, a story might be about how a network member refused to take a bribe to acquit someone in their court. Validating it would require seeking out the individual who offered the bribe and assuming they would provide an accurate account. Not only did we feel this was unlikely, but we were also concerned about the impression it would give the Network ‘story tellers’ if we started ‘validating’ them so publicly. Would it seem distrustful? Disrespectful?


Ultimately, we used the power of the group to validate the stories during the feedback session by asking more detailed questions (how did this happen, who was involved, etc.). Our hypothesis was twofold: (1) the justice sector is small, so telling a total fabrication without someone in the room becoming suspicious was unlikely; and (2) if participants told the same story twice, it was more likely to be “valid.” (Unfortunately, it wasn’t possible to repeat the process for each story.)


2. Testing the methodology

In year one we organized two MSC sessions to allow enough time for the program team to become familiar with the methodology. Before the “real” MSC session, we organized a “test” session with a small group of volunteers, all part of the network’s leadership group, to gauge reactions to the approach and make any adjustments. One of the most important outcomes of this experience was a set of improvements to the prompt questions. For example, participants recommended that we clarify the questions’ timeframe, e.g. “since your involvement in the project” or “since your involvement in the network”. We ended up with the following questions:

  • From your point of view, tell me about the most significant change that has resulted since you became involved in the project. This was a purposefully open-ended and broad prompt.

  • Since your involvement in the network, what do you think was the most significant change in your behavior regarding corruption or resisting corruption? This prompt was more focused, as behavior change was a key component of the program design. It was followed up with two more questions:

    • Why was that significant to you?

    • Do you feel this came from your involvement in the network or from other things that you are doing?

3. The first MSC session

For the first session, we felt the entire Network would be too large a group, and some members were brand new, so the MSC process was not yet appropriate for all. As a result, the team generated a list of regularly attending members and then called to ask if they would like to participate; this yielded 12 participants. After the introduction and explanation, participants had 30 minutes to write their stories. A selection committee (one program team member and three network members) picked three stories based on criteria established before reading all the stories: coherence, level of detail, and reference to the network. The criteria were proposed by the program team and amended by the participants. The selection committee then read the three stories to the whole group. After the stories were selected, we had hoped the group would discuss what the choice of these stories meant. In reality, the discussion was more of a self-reflection (e.g. people said, ‘oh, this happened to me too’).


4. The second MSC session

We adapted the second MSC session in a few ways based on what we learned in the first. For instance, we made further tweaks to our prompt questions, as participants felt they were too similar to each other in the first session. We also improved our guidance to the group on how to answer the questions. Finally, we shortened the time available to the selection committee to provide more time for the story writing and the discussion at the end.


Tweaking the process in this way had several positive consequences. The stories were more developed than in the first session, though they still missed some personal details and details on how and why change happened. The improved guidance made the exercise clearer from the beginning, so it took less time. This time, when the selection committee presented the stories they had selected, participants felt more comfortable with the process and debated the content of the stories. Instead of analyzing the selection process, participants raised questions about the stories they heard, reflected on them, and asked themselves what they would have done instead. The team responded by stressing that every participant should write the story as he or she lived it and that there was no judgment to be made.


Did MSC meet our needs?

In many ways, using MSC was a terrific learning experience for the team and the network. The process generated useful information on what mattered to the membership and what changes took place. This gave the implementing team greater insight into which programming activities should receive more emphasis going forward, and which less. It also gave concrete examples of where change was happening. For instance, in the first session participants told many attitude or ‘realization’ type stories, such as ‘I have awoken to how corruption harms my country and me.’ Conversely, very few behavior change stories were told in that session, which helped the implementing team understand the consequences of the work to date.


Further, the process did well in supporting the notion of Network ownership, and we were correct in our sense that it was a good ‘fit’ for our audience. However, we did not manage to identify prompt questions that would create consistent insights into how and why change happened.


Do you have insights into using MSC on sensitive topics?

We would be very interested to learn how others have handled validation when dealing with sensitive topics, like resisting corruption. If you have tactics that worked, or not, we would welcome learning of them.

 

What’s next

For more reflections and lessons learned, see the post Reflections on using Most Significant Change in an anti-corruption program.

Header image: An adapted snippet from one of the participants’ stories.


 

About the authors

Sandra Sjögren, MA, has coordinated development programs for multiple international organizations in the Great Lakes region, including Search for Common Ground, Heartland Alliance International, Physicians for Human Rights, and, more recently, RCN Justice and Démocratie. In addition to her background in program management, Sjögren has extensive experience in monitoring and evaluation in the social field. She has conducted qualitative research studies on the reintegration of formerly incarcerated persons in the United States, and she supervised the monitoring and evaluation methodology as program coordinator in the justice sector. Sjögren holds an MA in International Studies from the University of Oregon and a BA in political science from the University of Paris. More information can be found on her LinkedIn profile.


Cheyanne Scharbatke-Church is Principal at Besa: Catalyzing Strategic Change, a social enterprise committed to catalyzing significant change on strategic issues in places experiencing conflict and structural or overt physical violence. As a Professor of Practice, she teaches and consults on program design, monitoring, evaluation and learning.


