Day 3 - April 25 - Evaluation Use


Thank you all for the resources and ideas that are being shared. Today (Wednesday) I am hoping to hear from you about how evaluation has been used within your organizations, and within those you fund. Could you answer one or both of the following questions?

1. Would you tell us about the most useful evaluation of an advocacy campaign in the Global South that you have ever encountered? What made it useful?

2. Would you tell us about a time that you, or your organization, used evaluation data to make a decision about your projects? What did that process look like?

I look forward to all of your comments!

Uptake of evaluation findings

Last year we conducted a program-wide evaluation, as well as evaluations on a few specific projects.

One project-specific evaluation was for a multi-country project on LGBT access to justice. Our internal team (director, program officers, in-country staff and consultants) was engaged from the beginning to develop the evaluation questions, and they were interviewed as part of the evaluation. After the evaluation was completed, we initiated a series of discussions to promote reflection on the findings, focusing on how the team could take up these findings at the country level, at the team level, and institutionally. We prepared summary reports and PowerPoint decks to facilitate sharing of findings with grantee partners. At the country level, staff are sharing country-level findings back with grantee partners and engaging in discussion on how to address the recommendations and what challenges may be anticipated.

At the institutional level, we are making use of the findings in a variety of ways. We've engaged a consultant to do a meta-review of all the evaluations. This review is being discussed across the institution and is feeding into our institutional strategic planning process. Within the Programs Division, I am facilitating an iterative, structured and participatory process. For each evaluation, I've extracted the detailed findings and categorized them. We are currently prioritizing these findings for action in the short, medium and long term with the programs leadership team and thematic teams.

I imagine that we will spend the year ahead both responding organically to some findings and working deliberately to address others.


Thank you, Irit, that meta-review is a fascinating process, and I'm sure very involved! Combining results across evaluations is not done often in the field, and it's great to hear about the ways in which this will feed directly back into organizational planning.


At the internal level, our main evaluation tool is a portfolio review, which the staff member in charge of a body of work conducts every two years. It takes the form of a short document in which the staff member highlights achievements, failures and lessons learned regarding the goals we pursued in the portfolio over the last two years. We include examples of work conducted by grantees as well as decisions we made as a funder. The main question of the review, which is entirely retrospective, is: what would we have done differently had we known then what we know now? The document is then presented to a panel of staff from various parts of the organization for a discussion with the portfolio lead about the findings and questions that arose. The exercise ends by looking ahead at what changes we should make to be more effective going forward, which results in some recommendations to adapt the strategy. I have found the process heavy but very useful in looking critically at a piece of work and drawing lessons from successes and failures.

Internal evaluations

We also use portfolio reviews to learn and adapt strategy. There is an external evaluation during the final year of a four- to five-year strategic plan. At the two-year point we do an internal reflection asking the team to address four questions:

1. Does your grantmaking to date align with where you thought you would be in making progress toward strategic outcomes? (If not, what is different about actual experience as contrasted with expectation?)

2. What have you learned about what has worked well and what has worked less well?

3. If you had one "do-over", what would it be?

4. Are there strategic adaptations you want to consider?


Thank you both. I love the strategic portfolio review process to promote learning in your organizations! Thank you, Jackie, for providing those questions to give us a concrete idea of what is discussed.

My nominee: Treatment Action Campaign

A lot has been written about the Treatment Action Campaign. Their work focused on access to antiretrovirals for HIV-positive South Africans and, later, access to quality health services for anyone who needs them.

For me, they are a model because the evaluation was not a "one-off", and it was not an evaluation "study" (although they did have external analyses and evaluations conducted at times). They implemented an evaluation system that combined evaluative thinking, an intentional learning agenda, and documentation.

Many advocates do not like evaluation tools such as logic models. They believe they are too linear and too reductive, and that advocacy is too uncertain to map a plan with intended outcomes. The Atlantic Philanthropies was a TAC funder; its CEO also was not a fan of logic models and was a huge fan of TAC and its leaders. Asked to speak at Atlantic, TAC's leader spent a significant amount of time talking about how useful their logic model was. He saw it as an important way to clarify thinking and assess alternatives. He did not view it as constraining the work or holding them to a "set in stone" approach. He viewed it as a tool to influence strategic thinking. (I'm not sure it completely changed our CEO's viewpoint, but it shifted it.)

Advocates often have a "go to" strategy regardless of the policy change they are seeking, e.g. grassroots organizing, public awareness campaigns, lobbying. TAC did not use CEI's matrix in the publication I shared on advocacy theory of change, but the results suggest they did something similar. They considered various strategies and landed on an unusual combination: movement building and litigation, a combination that proved key to their success. This is another example of evaluative thinking.

TAC was intentional about identifying "what we need to learn". One example is their belief that advocates needed to be knowledgeable about the science of HIV. They established a partnership for regular trainings and knowledge dissemination to staff and advocates. Quarterly meetings provided space for reflection and adapting strategy.

Data collection was integrated as well. Monitoring health system capacity to provide services in different areas allowed them to target activities.

I think evaluation will be most useful for advocates if we can expand their thinking about what it looks like when it is working well and in a useful way. It actually does not look like what most people think of when they hear evaluation.


What an amazing example! Thank you so much, Jackie, for that thorough explanation. Indeed, there is so much here that TAC can be proud of. I also love your discussion of logic models in advocacy. We have also heard that activists are usually quite averse to the idea of logic models. I think one's take on them probably comes down to who decided to use the logic model as a tool: was it imposed from outside as a requirement, or chosen from inside the campaign as a useful tool?

I found this article on this case as well, for others' reference - 

https://academic.oup.com/jhrp/article/1/1/14/2188684#33816484