
Evaluation and Measurement

This module focuses on strategies and models for measuring short-term outputs and evaluating long-term outcomes, with consideration of both qualitative and quantitative practices.


DATE

Friday, April 17, 2026

TIME

10:00 am - 11:30 am CT

PRESENTERS


Salomon Moreno-Rosa

Managing Director at Envoy

Salomon is a Managing Director at Envoy. He uses his background and expertise in nonprofit administration and policy development to inform nonprofit operations, philanthropy, and strategic planning engagements that help drive organizational goals and advance local outcomes.  


Alejandra Piers-Torres

Manager at Envoy 

Alejandra is a Manager of Strategy & Philanthropy at Envoy. She brings experience in local government, public/private partnerships, and program development to support social impact initiatives. Alejandra holds a BA in International Relations and Hispanic Studies from Brown University.


Rothschild Toussaint

Associate at Envoy 

Rothschild is an Associate in the Strategy & Philanthropy practice at Envoy. He brings experience in economic development, affordable housing, research, and policy analysis, and employs a mixed-methods, data-driven approach to tackling social challenges. He holds a BA in Economic Geography from Dartmouth College.

Session Agenda

  • Welcome & Ice Breaker

  • Review of Last Module

  • Module 5 Recap

  • Guest Speaker

  • Breakout / Discussion

  • Closing and Next Steps

Resources

JSMF resource

This document has been prepared by the JSMF foundation team to support organizations navigating different elements of their partnership.

Evaluation Guides

This handbook is intended to educate busy nonprofit directors and staff about the essential elements of evaluation, so you can work more effectively with trained evaluators; hold evaluators accountable to the highest standards of quality, integrity, and competency; and maximize the usefulness of evaluation to your organization.

This guide is a starting point for planning and implementing program evaluations. It provides practical applications of the CDC Program Evaluation Framework to any program evaluation. Although the CDC Program Evaluation Framework is often discussed as a linear process, conducting program evaluation is iterative. This Action Guide is intended to support your current evaluation need(s) and help you develop robust program evaluations.

A maturity framework (sometimes called a maturity model) is a structured description of the characteristics and stages of improvement, i.e., what poor, good, and great look like. Maturity frameworks, which evolved primarily from the quality management field, aim to simplify complexity and enable organizations to be measured and compared.

This document presents the framework, and an initial set of recommendations on implementation of the framework. Subsequent phases of the Actionable Evidence Initiative will continue working with the broader Actionable Evidence Network and a targeted Community of Practice to expand opportunities, use cases, guidance, and tools for applying and evolving the framework to achieve more equitable educational outcomes.

The program logic model is defined as a picture of how your organization does its work: the theory and assumptions underlying the program. A program logic model links outcomes (both short- and long-term) with program activities/processes and the theoretical assumptions/principles of the program.

The Pell Institute's Toolkit for Equitable Evaluation is specially designed for professionals who work with college outreach and student support programs and are interested in conducting small-scale, high-quality evaluations of those programs. The contents of this Toolkit will help determine the effectiveness of program practices and generate recommendations for program improvement, refinement, and success.

Read through the self-reflection questions below. Imagine someone you trust is asking you these questions, and answer any that feel useful to you. These questions are intended to both spark and help organize your thoughts and insights. No need to answer every single question! Focus on the ones that pique your interest.

Move beyond the initial and external reasons for success and failure. Use this worksheet as a tool to push yourself and your teams to deeper learning and reflection.

Failure Reports are a great way to begin building a culture of speaking honestly, openly, and productively about failure. By publishing a Report, you demonstrate that you are dedicated to learning, innovation, and risk-taking.

What's your true appetite for risk and innovation? Mark your current resource levels with a dot on each corresponding line. Connect the dots to see a picture of your current appetite.

Is your organization maximizing learning, innovation, and resilience? Highlight the sections below that most sound like you. How well do you do?

Evaluation is the systematic collection of information about a program that enables stakeholders to better understand the program, improve its effectiveness, and/or make decisions about future programming.

The basic steps of nonprofit measurement and evaluation are straightforward:

  • Define what outcomes and related metrics matter most, based on the organization’s theory of change.

  • Measure the metrics by gathering quantitative and qualitative data.

  • Learn and improve based on the data you collect.

The Equitable AI Adoption (EAIA) project aims to fill that gap. To this end, Project Evident is surfacing, creating, and disseminating practical stories of early adopters to understand their AI implementation and distill broadly applicable insights through the development of an EAIA framework and related case studies. This project sought to deeply understand what the core of equitable and effective AI implementation in the social and education sectors looked like by drawing on the experiences of nonprofit practitioners.

This chart shows how AI has continuously transformed different aspects of philanthropy through individual, organizational, and industry usage. It describes how AI use already exists, but informally: uneven, invisible, and sometimes minimized by leadership.

The goal of this tool is to provide grantees and the broader field with a research-based instrument to promote organizational capacity self-assessment. State commissions and other intermediaries may find this tool particularly helpful in working with subrecipients to identify capacity strengths and areas for support. The tool is designed to be a conversation-starter within an organization and between organizations engaged in a technical assistance relationship.

Evaluation capacity building involves developing the motivation, knowledge, and skills for conducting evaluations at the individual and organizational levels. As such, it refers both to the ability to use evaluation information and to conduct evaluations effectively.

Outcome metrics can measure financial or non-financial criteria that reflect an organization’s, program’s, or initiative’s efficacy. They’re derived by carefully defining outcome indicators, data-collection methods, analytical techniques, and presentation vehicles that collectively show a rich picture of organizational performance. These outcome metrics may go by many names and fit in countless categories. Many nonprofits obtain their best results by measuring across multiple dimensions, producing blended scorecards that encompass activities, capacities, financial results, and other metrics. Ultimately, well-defined outcome measures help organizations continuously adapt and improve.

Dashboards, like any report format, are limited in what they can accomplish. To provide meaning and insight, a dashboard report needs to be understood and used within the context of effective governance practice and organizational planning and evaluation.

Presentation slides used during the April 17th session


© 2025 by Envoy Advisory LLC.
