That Queasy Feeling…

Finally, an article speaking some truth about impact evaluation! 

Admittedly, the ability to articulate the impact of an intervention has held my rapt attention for the past couple of years. Yet contemplating the practicalities of conducting an impact evaluation makes me queasy. Producing reliable data requires resources, time, expertise, and a very liberal tolerance for ambiguity. And some prudent decision-making.

Impact evaluation is not for everything, everyone, all the time, in every context, for every program or service.

My queasiness lurches towards nausea when I begin to think about how the data from impact evaluation could be hijacked for the benefit of briefing notes and sound bites itemizing short-term successes and veiled failures.

My nausea rushes to the surface of my skin with sweats and shivers when I allow my thoughts to venture into a scenario where impact evaluation is done poorly, delivering false results, producing inaccurate analyses, and inevitably leading to wasteful policy decisions.

Thankfully, my tummy settled as I read through the article Ten Reasons Not to Measure Impact—and What to Do Instead, by Gugerty and Karlan.

However, anyone expecting nifty little shortcuts for rigorously and reliably measuring the impact of an intervention will be disappointed, because the ‘What to Do Instead’ parts of the article seem a bit anemic.

More importantly, the authors encourage readers to slow down and make time to:

  1. Clearly articulate the type of question you want the evaluation to answer.
    • Program monitoring questions ask how well the intervention works, gathering data to determine for whom it is working.
    • Impact evaluation questions take the learning further by asking why the intervention works, gathering data to determine why it works well for Group A and not so well for Group B.
  2. Gather monitoring data before conducting an impact evaluation. Meaning, make sure the implementation of the program is sound before attempting to determine whether it has made a difference.
  3. Determine whether conducting an impact evaluation is actually worthwhile. Think about the ways in which an impact evaluation will (or won’t) inform the intervention’s theory of change.

Impact evaluation has the potential to profoundly influence the choices we make to better serve people in our communities. 

Unfortunately, if we ignore the cautions set out by Gugerty and Karlan, impact evaluation risks becoming a more complicated, expensive, soul-crushing, labor-intensive way to measure outputs.

Podcast Pick: SSIR – Whose Story Are We Telling? (Andrew Means)

Before hearing Andrew Means on the SSIR podcast, I would listen to the stories not-for-profit organizations tell about their work and couldn’t help but be inspired. Hearing the exceptional rags-to-riches story about a person experiencing homelessness who used an NPO’s employment program and now manages a local million-dollar company would move me to open my wallet.

Another common narrative is grounded in data. After hearing that, upon completion of the employment program, 85% of participants (people experiencing homelessness) became employed, I became convinced the program was ‘doing something right’.

Look more closely at the stories and you’ll notice they have two common deficiencies.

First, they fail to link the complex nature of the problem to the need for their program. Homelessness is multi-faceted, with causes sprouting from racism, poverty, abuse, family violence, mental illness, and addiction, to name a few. Are you able to articulate how the employment program addresses some of these broader facets of homelessness?

Second, the stories tell us what has been accomplished in the past but fail to articulate why it matters for the future.  Has the story about the employment program taught us about the ways in which being employed will impact people experiencing homelessness in the future?

Means believes our stories need to go beyond our comfortable narratives to include how the program/organization has impacted the broader systemic context.

Yes. Impact. When I think about measuring impact, I am immediately overwhelmed by where to start, while remembering past attempts rife with pitfalls and blind alleys. But Means believes it’s the key to making progress on complex, systemic, nasty, intractable social problems, and he has a tidy little formula to get us started.

World with your organization – World without your organization = Impact of your organization 

Tidy to say. Still messy to do. Fortunately, Means gives us a couple of tips and some excellent examples to get us thinking about how to start.

Counter-factuals provide an accounting of what would have happened if the organization or program had never existed. This would mean asking how many of the participants who got a job after completing the employment program would have landed employment anyway, then setting this against the 85%.

Displacement helps us articulate how our work causes ripples in the broader context. It might mean asking how many participants who got a job after completing the employment program are filling positions that would otherwise have been filled by equally qualified people already in the labour market, and setting this against the 85% too.
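
To make the formula concrete, here is a purely hypothetical back-of-the-envelope sketch; the 60% and 10% figures are invented for illustration, not drawn from the podcast or from any real program data:

    85% of participants employed after completing the program
  – 60% who plausibly would have found work anyway (the counter-factual)
  – 10% whose new jobs simply went to them instead of equally qualified job-seekers (displacement)
  = roughly 15 percentage points of employment the program can credibly claim as its impact

Even rough estimates like these shift the story from ‘85% got jobs’ to a more honest account of what the program itself contributed.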

Granted, quantifying displacement and counter-factuals can be time-consuming and possibly expensive. But Means is nudging us towards authentically confronting the gap between what we want to accomplish and what we are actually accomplishing.

The result will be a community making more informed decisions about contributing towards outcomes we actually want to achieve rather than outcomes we pretend we are achieving.

When we join the crowd at the Annual General Meeting, it is with the expectation that we will hear the stories that make us feel like we are in the presence of something that matters.  But the stories are stuck in a rut.  They have a predictable plot involving the usual characters.  Think the movie Star Wars.

Means is nudging us towards telling more complex stories by introducing compelling storylines and new characters illustrating the relationship between complex problems and our work.  Think the movie Interstellar.

 

Whose Story Are We Telling? Featuring Andrew Means from Stanford Social Innovation Review Podcast