Measurement pathways and evidence for impact

Schorr, L. (2012). Broader Evidence for Bigger Impact. Stanford Social Innovation Review, 10(4), 50–55.

No matter which sector is trying to measure impact or which approach is taken, understanding what types of evidence are credible, rigorous and reliable is critical. In this 2012 article, Lisbeth Schorr (Lecturer in Social Medicine at Harvard University and Senior Fellow at the Centre for the Study of Social Policy) presents a case for a broader understanding of evidence and for common ground on improving evidence for impact.

The article is framed around the debate between experimentalists and inclusionists. Experimentalists argue that credible and rigorous evidence comes from scientific methods such as randomised controlled trials (RCTs), which aim to identify a causal relationship between an intervention and its outcomes; RCTs are often lauded as the ‘gold standard’ of evidence. Inclusionists reject this narrowly scientific view, advocating instead for a broader and richer understanding of evidence that better reflects the realities of social interventions and the complexities of wicked problems.

Schorr argues for a middle ground. She maintains that users of evaluations and impact measurement need to acknowledge the complexities of social programs and the methodological limitations of purely scientific approaches, and therefore take a broader view of evidence. Schorr emphasises the importance of high-quality evidence that focuses on outcomes for users, communities and society, and she warns against relying solely on evidence of past performance, because it overlooks shifting political, economic and social contexts as well as rapid developments and innovations in service delivery.

To help organisations navigate the measurement space effectively for improved evidence and impact, Schorr presents a pathway of four fundamental principles relevant to both camps:

  1. Begin with a results framework: Start with a results framework that identifies clear and measurable results for users and communities. It should be developed through a collaborative process with stakeholders and should include clarity and agreement around purpose, commitment to data, accountability for results, and the responsibilities and structure of the evaluation.
  2. Match evaluation methods to their purpose: As identified in Barraket & Yousefpour (2013), it is important to understand why you are measuring and what you will use the evidence for before deciding on an appropriate method.
  3. Draw on credible evidence from multiple sources: Where available, use existing sources of credible, high-quality evidence, including program evaluations, academic studies and reports.
  4. Identify core components of successful interventions: Identifying and understanding the core components of effective interventions and how they can be adapted to your organisation’s context can provide evidence for improved performance.


Stephen Bennett
Research Officer, the Centre for Social Impact
