That Queasy Feeling…

Finally, an article speaking some truth about impact evaluation! 

Admittedly, the ability to articulate the impact of an intervention has held my rapt attention for the past couple of years. Yet contemplating the practicalities of conducting an impact evaluation makes me queasy. Producing reliable data requires resources, time, expertise and a very liberal tolerance for ambiguity. And some prudent decision-making.

Impact evaluation is not for everything, everyone, all the time, in every context, for every program or service.

My queasiness lurches towards nausea when I begin to think about how the data from impact evaluation could be hijacked for the benefit of briefing notes and sound bites itemizing short-term successes and veiled failures.

My nausea rushes to the surface of my skin with sweats and shivers when I allow my thoughts to venture into a scenario where impact evaluation is done poorly, delivering false results, producing inaccurate analyses, and inevitably leading to wasteful policy decisions.

Thankfully, my tummy settled as I read through the article Ten Reasons Not to Measure Impact—and What to Do Instead, by Gugerty and Karlan.

However, anyone expecting any nifty little shortcuts for rigorously and reliably measuring the impact of an intervention will be disappointed because the ‘What to Do Instead’ parts of the article seem a bit anemic. 

More importantly, the authors encourage readers to slow down and make time to:

  1. Clearly articulate the type of question you want the evaluation to answer. 
    • Program monitoring questions: we want to learn how well the intervention works, gathering data to determine for whom it is working.
    • Impact evaluation questions: take the learning further by asking why the intervention works, gathering data to determine why it works well for Group A and not so well for Group B.
  2. Gather monitoring data before conducting an impact evaluation.  Meaning, make sure the implementation of the program is sound before attempting to determine if it has made a difference.
  3. Determine if conducting an impact evaluation is actually worthwhile.  Think about the ways in which an impact evaluation will (or won’t) inform the intervention’s theory of change.

Impact evaluation has the potential to profoundly influence the choices we make to better serve people in our communities. 

Unfortunately, if we ignore the cautions set out by Gugerty and Karlan, impact evaluation runs the risk of becoming a more complicated, expensive, soul-crushing, labor-intensive way to measure outputs.

A Guide to Actionable Measurement

How organizations make decisions has begun to garner more of my interest as I delve further into the murky undertow of impact measurement.  Recently, I came across A Guide to Actionable Measurement (17 pages), which offers a glimpse into what influences resource and fund allocation at the Bill and Melinda Gates Foundation.  It’s clear. It’s succinct.  It’s different.

To begin, they actually articulate why they are evaluating and how the data will be used.

“Our philosophy and approach emphasize measurement done for a specific purpose or action. We recognize the most elegant evaluation is only meaningful if its findings are used to inform decisions and strengthen our work to improve people’s lives.

Our approach is driven by three basic principles: 1) Measurement should be designed with a purpose in mind — to inform decisions and/or actions; 2) We do not measure everything but strive to measure what matters most; 3) Because the foundation’s work is organized by strategies, the data we gather help us learn and adapt our initiatives and approaches.”  (Actionable Measurement Guide Cover Letter)

Creating impactful interventions for complex problems relies on informative evaluation that strikes an effective balance between learning (improving something) and accountability (proving something). Both are needed, and both are valuable for understanding how and why an intervention is effective or ineffective.

At present we are super-proficient at accountability evaluation.  How many?  How often?  Numbers in a spreadsheet.

Unfortunately, our evaluation efforts often fail to make meaning from the numbers.  In what ways did reaching the target make a difference?  How did the intervention ‘move the needle’ on the problem it is trying to address?

Putting together an evaluation approach designed to answer these deeper questions can be stymied by the overwhelming feeling of not knowing where to start or the tendency to build something unnecessarily complicated.  Combing through the Guide to Actionable Measurement has revealed a few tips.

Begin by looking at the language being used to describe the evaluation approach.  The Foundation is intentional about including phrases like ‘strategic intent’, ‘theory of action’, and ‘actionable measurement’.  As an example, using ‘strategic intent’ over ‘strategic plan’ has an indelible influence on how the strategy is developed, deployed and measured.

Another manageable place to start is The Actionable Measurement Matrix (Exhibit 4, Page 6 of The Guide). It’s an example of how an illustrative visual can connect activities of a single intervention to the broader strategic intent being deployed to address a complex problem.

Finally, the Bill and Melinda Gates Foundation is careful to acknowledge and measure its own influence on the problem being addressed. Externally, they want to know how their activities as an advocate for policy change have impacted the issues they are trying to influence. Internally, they want to know how their interactions with grantees have impacted interventions and, ultimately, the problem being addressed.

Etmanski: Think and Act Like a Movement

Al Etmanski’s book assumes readers are striving to achieve real systemic change, and his first pattern articulates the importance of movements.  For Etmanski, “Institutional change cannot happen without a movement” (p. 49) and “A movement is composed of a million small acts” (p. 48).

This is not new.  Change occurs when enough people are moved to action for a long enough period that it finally happens. Gladwell describes this type of change in The Tipping Point.

The new piece for me was Etmanski’s insistence that systemic change requires us to look beyond our immediate context and missions to the broader goals of the movement. As an example, you might identify a gap in mental health support for veterans. To impact the system creating the gap, Etmanski believes you should align your efforts with broader movements like the Movement for Global Mental Health or the Canadian Mental Health Association.

Think back to the occasions when you were involved in developing a mission and vision for your organization.  Was the movement a part of those discussions and considerations?  Was a movement objective devised alongside the mission and vision?

It’s not unusual for us to focus on the local context where we can see the ways our work makes a difference.  Spending time and energy on thinking about how we will contribute to the ‘movement’ seems like an abstract, ambiguous, pointless task.

To keep conversations about developing a movement objective meaningful, Etmanski provides some loose boundaries in his characteristics of an effective social justice movement.

Five Characteristics of an Effective Social Justice Movement

  1. “They ignite our imaginations” – Do you contribute to a bold vision that disrupts the status quo?
  2. “They are multi-generational” – Do you contribute to movements as they reappear in new forms with successive generations?
  3. “They comprise small acts” – Do you contribute to the same thing that others feel compelled to contribute to?
  4. “They are self-organized” – Do you contribute to something in which everyone sees the goal without a central command structure or charismatic champion?
  5. “They marry art and justice” – Do you contribute to something in which art has created new ways of seeing the world and transformed what we see as a possibility?


Would you say the efforts to help the flood of Syrian refugees constitute a movement?  Or the Arab Spring? Or Occupy Wall Street?


Impact: Six Patterns to Spread Your Social Innovation by Al Etmanski is a guide for social innovators to move their idea from localized success to broader systemic impact.


Making Your Idea Matter

Having repeatedly heard folks in social innovation circles refer to Al Etmanski, I felt compelled to pick up his book Impact: Six Patterns to Spread Your Social Innovation.  It’s intended to guide innovators after their idea has been mushed, mashed, shaped, sanded, polished and tested.

How does a social innovator shepherd an idea from local success to broader systemic impact?

I recommend picking up a copy for his stories illustrating many of the ideas I’ll be posting.  A quick, accessible, useful read.

Etmanski opens by saying we all have the ability, capacity and responsibility to innovate. We can’t escape by saying ‘I’m not a big idea thinker’ or ‘I’m a doer, not a thinker’. Further, he identifies three types of innovators needed to spread any idea, no matter how amazing, and achieve broader systemic impact.

Disruptive Innovators: have the unwavering belief that we can do better by challenging the way it’s always been done.

Bridging Innovators: have the credibility and networks to highlight the benefits of the disruptive innovator’s new idea for institutions and policy makers.

Receptive Innovators: have access to and knowledge of the system needed to change policy, law or funding to make the idea possible (also known as intrapreneurs).

Which type of innovator are you?  Even more importantly, do you know anyone from the other two types of innovator?