The Constellation Fund develops and uses an evidence-driven framework for assessing the impact of poverty-fighting programs in our community. By leveraging the full power of analytics to harness the best available information, we power each of our grantmaking decisions with objective insights to maximize lifetime impact.
When impact gets measured, what works can grow
How it works
Our Evidence-Driven Approach
Expertly built for our community with a team of Minnesota’s leading economists and research organizations, Constellation’s metrics bring world-class quantitative analysis to local philanthropy in the Twin Cities.
Through our metric-driven approach, Constellation uses the best available evidence – peer-reviewed research, local demographic information, nonprofit data, and more – to carefully value the measurable improvements that poverty-fighting organizations are projected to create for people living in poverty. This evaluation results in a benefit-cost ratio that helps us build a portfolio of nonprofits creating transformational impact. On average, our grantees are projected to turn every dollar invested into nearly $5 of poverty-fighting impact in our community; watch this brief video for a simplified summary of how we uncover that impact using evidence.
Get more out of your donations
Smart philanthropy is kind philanthropy. In collaboration with our partners, donors, and grantees, we work to shift the philanthropic mindset from feel-good charity to strategic investment, using the best available evidence and community insights to identify and invest in poverty-fighting programs that deliver the highest impact. We’re then able to build a portfolio of the most impactful grantees working across a broad spectrum of interventions to end poverty in our community. This work helps you get the most out of your philanthropic investments.
Why we build metrics
Having evaluated the impact and valued the measurable quality-of-life benefits of a program, Constellation can then create an evidence-based benefit-cost analysis for each prospective nonprofit grantee. This allows for comparisons between the poverty-fighting impacts of otherwise dissimilar intervention types – for example, comparing an early childhood education program with a housing program for homeless veterans.
Once Constellation has conducted these quantitative analyses, we carefully weigh them alongside qualitative considerations in an effort to allocate our resources to the programs with the largest poverty-fighting impacts.
CONSTELLATION’S METRICS ARE
A STANDARD FOR COMPARING OPPORTUNITIES
Metrics allow for the weighing of similar and dissimilar programs against one another.
A TOOL FOR ACHIEVING TRANSPARENCY
Constellation welcomes outside voices to examine, critique, and improve our metrics.
A DIAGNOSTIC DEVICE
What do our highest-scoring grantees have in common? Our lowest?
A METHOD FOR ASSESSING CONSTELLATION
We measure our own impact the same way we measure our grantees’: how much poverty we eliminate with each dollar we spend.
CONSTELLATION’S METRICS ARE NOT
THE ONLY CRITERIA FOR MAKING GRANT DECISIONS
Qualitative evaluations and assessments of community needs also influence our grantmaking.
REPORT CARDS ON PROSPECTIVE GRANTEES
A nonprofit can fulfill its own mission without scoring high on Constellation’s metrics.
EXACT AND UNCHANGING
Neither the data we capture nor the formulas we apply are perfect, and the metrics are designed to evolve with additional research and refined calculations.
THE ONLY APPROACH TO SMART PHILANTHROPY
Other funding organizations may employ different but useful standards.
At their core, each of Constellation’s metrics combines three key layers of information to create benefit-cost analyses that measure the expected long-term, poverty-fighting impact for every dollar we invest:
- Peer-Reviewed Research: Constellation uses the best available information. Ideally, this is long-term longitudinal outcome data from nonprofits, but that rarely exists. When it does not, Constellation uses the most credible and relevant peer-reviewed research to help understand the likely measurable lifetime outcomes that a given poverty-fighting intervention will create for low-income individuals and families.
- Counterfactuals: To truly understand the impact of our grants, Constellation compares the outcomes for participants in a given program against what would have happened had they not participated. A school that graduates 95% of its students sounds impactful, but if 90% of those students are high-performers who would have graduated anyway, the impact of the school isn’t as big as it might seem. To account for this, a counterfactual serves as a benchmark against which to compare outcomes. This is often created by leveraging local demographic information (e.g. from census data) to estimate what, on average, is happening in the lives of participants without support, and then subtracting that estimate from the expected outcomes so it captures only the actual impact of a potential investment.
- Nonprofit Data: Constellation uses the information that poverty-fighting organizations generally do have — how many people they are serving, the demographics of those participants, the financial cost of their programming, and more — to tailor the metrics to the realities of the specific organization up for evaluation.
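The counterfactual adjustment described above can be sketched in a few lines, using the hypothetical figures from the school example:

```python
# Hypothetical figures from the school example above.
program_rate = 0.95         # observed graduation rate among participants
counterfactual_rate = 0.90  # estimated rate had they not participated

# The program's true contribution is the difference, not the raw rate.
net_impact = program_rate - counterfactual_rate
print(f"Net impact: {net_impact:.0%}")  # 5%, not 95%
```

Only this 5-percentage-point difference feeds into the metric; the other 90 percentage points would have happened without the program.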
This information (and plenty more) is then carefully built into a metric with the following basic structure:
- Number of participants: This information is provided by the potential grantee about the number of low-income individuals being served.
- Counterfactual: This is based on local demographic information and helps to estimate the net impact of the program (i.e. what would have likely happened to participants without the program).
- Impact Size: Often taken from peer-reviewed research, this represents a measure of how much a given program changes poverty-fighting outcomes for its participants (e.g. percentage increase in graduation rates).
- Monetary Benefit: This is an estimate of poverty-fighting value that a participant would gain from the quality-of-life improvements of a given intervention (e.g. the value to low-income community members of the improvements to education, health, or income). We only value a program’s benefits to members of our community who would face poverty, not benefits to the general public or high-income individuals.
Since a dollar today is worth more than a dollar in the future, Constellation discounts future benefits and values all benefits in today’s dollars. The net present value (NPV) expresses the present value of a stream of future benefits.
Constellation estimates as many of the poverty-fighting benefits of a program as possible using our metrics, then divides the sum of benefits by the total program and administrative cost to calculate a benefit-cost ratio.
The benefit-cost ratio creates a common scale, which allows comparisons of impact between otherwise incomparable outcomes.
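Putting the last two steps together, here is a minimal sketch of the calculation under stated assumptions: the numbers are hypothetical, and we assume benefits arrive at the end of years 1, 2, and so on.

```python
def present_value(benefit_stream, rate):
    """Discount a stream of future benefits (year 1, year 2, ...) to today's dollars."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(benefit_stream, start=1))

# Hypothetical program: $2M of poverty-fighting benefits per year for 3 years,
# a 3% discount rate, and $1.5M in total program and administrative cost.
pv = present_value([2_000_000] * 3, 0.03)
ratio = pv / 1_500_000  # benefit-cost ratio: discounted benefits per dollar of cost
print(round(ratio, 2))  # roughly 3.77
```

Because both programs' benefits end up expressed as present-value dollars per dollar of cost, ratios like this one can be compared across very different interventions.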
Constellation estimates the expected benefits of a proposed program investment based on a general model of the value of the stream of expected future benefits for low-income individuals and families. Each metric seeks to estimate the net present value of outcomes for the participants in a program.
Program impacts are estimated based on evidence from evaluation results of individual programs or average effect sizes from several evaluations. Constellation compares the estimated benefits to the cost for an organization to generate these benefits. The result is a benefit-cost ratio that indicates the value of the poverty-fighting benefits received by participating individuals for every dollar used to run the program. Benefit-cost ratios can be used to compare similar interventions, as well as programs with otherwise non-comparable outcomes.
Each metric follows the same basic structure:
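The equation that originally illustrated this structure does not survive in this text. Based on the symbols used in the worked example that follows (Q_t, P_t, a number of participants, and a discount rate), it presumably takes a form like:

```latex
B = N \sum_{t=1}^{T} \frac{Q_t \, P_t}{(1+r)^{t}}
```

where N is the number of participants, Q_t the net (counterfactual-adjusted) change in outcome probability in year t, P_t the monetary benefit per affected participant in year t, and r the discount rate.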
For a proposed job-search assistance program, Q1 would be its average expected impact on potential participants’ employment probability in the first year. Suppose the proposal is to assist 1,000 participants.
If the grant is funded and 1,000 people are assisted, then suppose 900 will be employed at the end of the year. If the grant is not funded, only 500 would be employed after a year. Q1 would then equal a 40 percentage point increase in participant employment probability. Suppose that at the end of the following year, 850 would be employed if the assistance program were funded and 600 would be employed if not. Then, Q2 would be 25 percentage points. Suppose that after that, there is no difference in employment probabilities.
Pt would equal the average earnings among employed participants at t years post-program. Suppose this is $15,000 in earnings beyond what each participant would have received without the program, each year.
Let’s assume a 3% discount rate and 1,000 participants.
Then, our estimate of the value of the proposal’s benefits would be:
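The resulting figure is not shown in this text, but under the assumptions above it can be computed as follows. This is a sketch, not Constellation's implementation: the function name is ours, and we assume year-t benefits are discounted by (1+r)^t.

```python
def npv_benefits(participants, impacts, gain_per_year, rate):
    """Net present value of a program's benefits.

    impacts: net employment-probability gains by year (Q_1, Q_2, ...)
    gain_per_year: extra annual earnings per employed participant (P_t)
    rate: annual discount rate
    """
    return participants * sum(
        q * gain_per_year / (1 + rate) ** t
        for t, q in enumerate(impacts, start=1)
    )

# Worked example from the text: 1,000 participants, Q1 = 40 points,
# Q2 = 25 points, $15,000 in extra annual earnings, 3% discount rate.
total = npv_benefits(1000, [0.40, 0.25], 15_000, 0.03)
print(f"${total:,.0f}")  # roughly $9.36 million
```

Dividing this figure by the proposal's total program and administrative cost would yield its benefit-cost ratio.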
These assumptions apply to all of the metrics in Constellation’s evaluation framework.
- The Constellation Fund calculates private monetary benefits for program participants, generally for individuals at or below 185% of the federal poverty guideline.
- All monetary values are converted to constant dollars.
- Present values are calculated by discounting future values at rates of 2%–4%.
- All earnings are net of taxes. For state taxes, we use effective tax rates by income percentile from the Minnesota 2017 Tax Incidence Study (Minnesota Department of Revenue, 2017).
- For federal taxes, we use federal effective tax rates by expanded cash income percentile reported by The Tax Policy Center (Urban Institute and Brookings Institution, 2016).
- As a general rule, the evaluator should check that the grantee’s intervention matches as closely as possible with the program models observed in the literature. Other matching factors may include characteristics of population served, intensity/dosage of program, and quality of program.
- The general equation does not include any differential weights on dollar gains for participants based on income or other factors. The Robin Hood Foundation experimented with using weights on gains but found no basis for choosing the weights, and, regardless, their weighting methods did not affect the relative ranking of grant proposals (Weinstein and Bradburd, 2013).
- The general equation does not include the value of any cash and noncash public transfer payments or benefits received or potentially lost due to increases in income.
- To determine average annual post-program earnings, we must consider potential issues with data reported by programs, including bias/error from self-reported earnings and spotty or missing data. Ideally, data should come from tax returns, verified income from employers, the Department of Employment and Economic Development, or other government agencies.
- Most subpopulation data is based on American Community Survey (ACS) microdata for the smallest relevant geographic area. If the sample size is too small for a specific subgroup (such as race, immigration status, or level of education), we use data for the full seven-county Twin Cities region or a larger geographic region where needed. If the population of interest is not found in Census data, we use third-party reports.
- To determine the counterfactual state of earnings or other outcomes in the absence of any program, we often use average earnings or other status factors of the target population in the Twin Cities area based on Census data. This average value or rate serves as an ad-hoc threshold for program impact.
Minnesota Department of Employment and Economic Development. (2017). Minnesota unemployment statistics. Retrieved from https://apps.deed.state.mn.us/lmi/laus/Default.aspx
Minnesota Department of Revenue. (2017). 2017 Minnesota tax incidence study. Retrieved from http://www.revenue.state.mn.us/research_stats/research_reports/2017/2017_tax_incidence_study_links.pdf
Urban Institute and Brookings Institution Tax Policy Center. (2016). Average effective federal tax rates – all tax units by expanded cash income percentile, 2016 [Model estimates – Distribution tables by percentile]. Retrieved from Tax Policy Center – Model Estimates
U.S. Census Bureau. (2016). American Community Survey 5-year estimates – public use microdata sample, 2012-2016. Generated using Public Use Microdata Area (PUMA) in the Seven-county Twin Cities Metropolitan Area. Retrieved from http://factfinder.census.gov
Weinstein, M. & Bradburd, R. (2013). The Robin Hood rules for smart giving. New York: Columbia University Business School Press. Retrieved from http://cup.columbia.edu/book/the-robin-hood-rules-for-smart-giving/9780231158367
Constellation works alongside leading economists, research organizations, and issue area experts to identify and effectively leverage the available information to build and refine our metrics evaluation framework. This information comes from many different sources: peer-reviewed research, local demographic databases, nonprofit partners, and beyond. This information is then carefully incorporated into Constellation’s metrics.
The building and application of these metrics is labor-intensive and requires considerable expertise. Constellation’s Chief Impact Officer holds a Ph.D. in Applied Economics and is tasked with drafting and applying the metrics. This work is done with considerable support from the Constellation Impact Council, a group of the state’s leading economists who provide ongoing oversight to help ensure the integrity of our metrics.
Constellation actively seeks input and feedback on and improvements to our metrics. We welcome community insights into new research to refine our metrics, ideas for new metrics, potential sources of data, and suggestions on how to refine and improve our methods. We value transparency and are committed to sharing the methodology behind our quantitative assessments. Lastly, we prioritize collaboration and are hopeful that other funders and policymakers will thoughtfully use these tools to inform decisions and help our communities move more effectively toward the shared goal of poverty alleviation.
We encourage funders and policymakers to thoughtfully use these metrics. The process of estimating the poverty-fighting benefits associated with a given program is complex and labor intensive, and usually requires the careful adaptation of each metric to meet the specific characteristics of and available information on the program being evaluated. Put another way: Constellation’s metrics are not “plug and play” or “one size fits all” tools.
Effectively using metrics in the field involves a number of critical and interconnected steps. Constellation works closely with a given organization to develop an in-depth understanding of the poverty-fighting intervention, its processes, and its expected outcomes. We conduct a thorough literature review to verify that there is reliable evidence that the proposed intervention is indeed effective. We identify the characteristics of the populations targeted by the program to create baseline “counterfactuals” that serve as benchmarks against which to compare outcomes. We determine and carefully organize the program data needed to apply the metrics, which varies by program. For instance, when a metric calls for the number of participants, a training program could provide the number of people receiving the training, but Constellation would also need their average age and level of education, along with the content and intensity of the curriculum, among other data; in many cases, these nuances are not explicitly called out in the metrics, so it falls to our trained staff to identify and carefully incorporate them. Lastly, we work closely with leading experts – including a Ph.D. economist on staff and the group of leading economists on the Constellation Impact Council – to help ensure that the metrics are assembled and applied accurately.
For their significant contributions to the development of our metrics, Constellation would like to thank:
- The Robin Hood Foundation;
- Dean’s Office, Humphrey School of Public Affairs, University of Minnesota;
- Constellation Impact Council: Richard Chase, Aaron Sojourner, Judy Temple, and Mark Wright; and
- Wilder Research staff: Amanda Eggers and Jennifer Collins.
Wilder Research, a division of the Amherst H. Wilder Foundation, is a nationally respected nonprofit research and evaluation group. For more than 100 years, Wilder Research has gathered and interpreted facts and trends to help families and communities thrive, get at the core of community concerns, and uncover issues that are overlooked or poorly understood.
Help us be brighter
We are continuously updating and improving our metrics with new research, data, and insights, and we welcome your help. One of the reasons we post our metrics online is to encourage feedback – and pushback – from the many brilliant people in the Twin Cities and around the world.
Share your feedback and tell us of relevant research and useful data you think we should know about. If you see a way to improve one of our metrics, we want to hear about it.