That Queasy Feeling…

Finally, an article speaking some truth about impact evaluation! 

Admittedly, the ability to articulate the impact of an intervention has held my rapt attention for the past couple of years. Yet contemplating the practicalities of conducting an impact evaluation makes me queasy. Producing reliable data requires resources, time, expertise and a very liberal tolerance for ambiguity. And some prudent decision-making.

Impact evaluation is not for everything, everyone, all the time, in every context, for every program or service.

My queasiness lurches towards nausea when I begin to think about how the data from impact evaluation could be hijacked for the benefit of briefing notes and sound bites itemizing short-term successes and veiled failures.

My nausea rushes to the surface of my skin with sweats and shivers when I allow my thoughts to venture into a scenario where impact evaluation is done poorly, delivering false results, producing inaccurate analyses and inevitably leading to wasteful policy decisions.

Thankfully, my tummy settled as I read through the article Ten Reasons Not to Measure Impact—and What to Do Instead, by Gugerty and Karlan.

However, anyone expecting nifty little shortcuts for rigorously and reliably measuring the impact of an intervention will be disappointed, because the ‘What to Do Instead’ parts of the article seem a bit anemic.

More importantly, the authors encourage readers to slow down and make time to:

  1. Clearly articulate the type of question you want the evaluation to answer.
    • Program monitoring questions ask how well the intervention is working, gathering data to determine for whom it is working.
    • Impact evaluation questions take the learning further by asking why the intervention works, gathering data to determine why it works for Group A and not so well for Group B.
  2. Gather monitoring data before conducting an impact evaluation. Meaning, make sure the implementation of the program is sound before attempting to determine if it has made a difference.
  3. Determine if conducting an impact evaluation is actually worthwhile. Think about the ways in which an impact evaluation will (or won’t) inform the intervention’s theory of change.

Impact evaluation has the potential to profoundly influence the choices we make to better serve people in our communities. 

Unfortunately, if we ignore the cautions set out by Gugerty and Karlan, impact evaluation runs the risk of becoming a more complicated, expensive, soul-crushing, labor-intensive way to measure outputs.

My Starting Point for Measuring Impact

Over the holidays I was with friends I hadn’t seen in a while, and a few did not know about my MS diagnosis and how it led to volunteer work with EMCN.

“I’ve been helping guide one of their teams through the human-centered design process,” I explained, “by convening social innovation labs and nudging them to think beyond ‘the way they have always done things.’”

“MS has limited the number of productive hours I have in a day,” I continued, “so I want them to matter. This means I’m also keenly interested in evaluating the impact of what we do.”

“Sounds interesting!” one person said. “How do you measure the impact of an initiative?”

A good question, to which I realized I did not have a concrete answer. I alluded to examples I had read about, and when I still received a glazed look of incomprehension I moved on to technical evaluation jibber-jabber I had heard in podcasts. As a last resort, I distracted the person with the snack tray.

That left me with an unanswered question, so I started looking more closely at what I actually want to accomplish.

If I were going to dedicate my limited hours in a day to measuring the impact of an initiative, what would that look like?

As of today, this is where my thinking has landed me. (Initiative here includes program, project, service…)

One.  The initiative swims in the constant churn of complex dynamic challenges.   

Two.  The initiative gains momentum from learning about the ways in which it is not making a difference in people’s lives.

Three.  The initiative is emergent.   

Four.  The initiative is fueled by possibility.

Five.  Any evaluation of the initiative will need to embrace One, Two, Three and Four.

A Guide to Actionable Measurement

How organizations make decisions has begun to garner more of my interest as I delve further into the murky undertow of impact measurement. Recently, I came across A Guide to Actionable Measurement (17 pages), which offers a glimpse into what influences resource and fund allocation at the Bill and Melinda Gates Foundation. It’s clear. It’s succinct. It’s different.

To begin, they actually articulate why they are evaluating and how the data will be used.

“Our philosophy and approach emphasize measurement done for a specific purpose or action. We recognize the most elegant evaluation is only meaningful if its findings are used to inform decisions and strengthen our work to improve people’s lives.

Our approach is driven by three basic principles: 1) Measurement should be designed with a purpose in mind — to inform decisions and/or actions; 2) We do not measure everything but strive to measure what matters most; 3) Because the foundation’s work is organized by strategies, the data we gather help us learn and adapt our initiatives and approaches.”  (Actionable Measurement Guide Cover Letter)

Being able to create impactful interventions for complex problems relies on informative evaluation striking an effective balance between learning (improving something) and accountability (proving something). Both are needed and valuable for understanding how and why an intervention is effective or ineffective.

At present, we are super-proficient at accountability evaluation. How many? How often? Numbers in a spreadsheet.

Unfortunately, our evaluation efforts often fail to make meaning from the numbers. In what ways did reaching the target make a difference? How did the intervention ‘move the needle’ on the problem it is trying to address?

Putting together an evaluation approach designed to answer these deeper questions can be stymied by the overwhelming feeling of not knowing where to start or the tendency to build something unnecessarily complicated.  Combing through the Guide to Actionable Measurement has revealed a few tips.

Begin by looking at the language being used to describe the evaluation approach. The Foundation is intentional about including phrases like ‘strategic intent’, ‘theory of action’, and ‘actionable measurement’. As an example, using strategic intent over strategic plan has an indelible influence on how the strategy is developed, deployed and measured.

Another manageable place to start is The Actionable Measurement Matrix (Exhibit 4, Page 6 of The Guide). It’s an example of how an illustrative visual can connect activities of a single intervention to the broader strategic intent being deployed to address a complex problem.

Finally, the Bill and Melinda Gates Foundation is careful to acknowledge and measure its own role in the problem being addressed. Externally, they want to know how their activities as an advocate for policy change have impacted the issues they are trying to influence. Internally, they want to know how their interactions with grantees have impacted interventions and ultimately the problem being addressed.

Michael Quinn Patton Visits Edmonton

Michael Quinn Patton, a leading influencer and thinker in the realm of evaluation, was recently in Edmonton to give a presentation on Developmental Evaluation. The day was packed to the rafters with learning, but here are a few things I found most salient.

The Evaluation Gap

The way programs are experienced is often disconnected from how they are evaluated. Logic models are designed to emphasize the way the program is experienced by the individual and neglect how it is experienced by the group. In particular, the relationships and social capital generated as a group are not usually measurable outcomes articulated in the logic model. How well the program works for the group can be overshadowed by how well it worked for the individuals in the group.

Lessons vs. Lessons Learned

Lessons are knowledge.  Lessons Learned are actions based on the new knowledge.

Formative vs. Summative vs. Developmental

Evaluation in a reasonably predictable context, using previously tested models, occurs formatively and summatively. Formative evaluation assesses project/program progress as it unfolds. Summative evaluation assesses the overall performance of the project/program to determine if it will be continued or altered. Together they answer the question: How do we improve the model?

Evaluation in a dynamic, unpredictable context without previously tested models requires a more flexible, iterative approach. Developmental evaluation is a constant process of trying various approaches and learning lessons through case-based reflective practice. It answers the question: How do we change what is occurring?


A couple other Patton nuggets from the day:

“Emergence is when people find each other and opportunities emerge from the connection”

“You can have specific outcomes and targets only when you know how to produce them”