A few Plenty teammates attended Do Good Data 2014 in Chicago last week. One of the sessions they attended covered the Impact Genome Project, led by Mission Measurement, which is attempting to quantify and benchmark the impact of social service programs by market and sector. The goal is to help nonprofits and others decide which programs to invest in or support. Programs would be measured against one another on equal footing via an efficacy score. By level-setting programs this way, the score would also speak to the operational effectiveness of the nonprofit, and become yet another resource used to inform donor-giving decisions.
We all want the hard-earned money we give away to make as big an impact as possible, so I like the idea in general. One question to consider, though: will impact benchmarking undermine the transparency this space has been encouraged to reach, or will it create even more? Knowing human behavior, in some instances it may create more transparency; in others, the ghosts may only be pushed deeper into the closets.
Either way, will the added peer pressure of these benchmarks create more busywork for NPOs? Will it create more performance anxiety in staff members already concerned with responsibly stewarding donor dollars for their programs? Will it increase time spent on reporting, cataloging, and bookkeeping? Or increase costs to implement better data-tracking systems? All of the above? Perhaps. Or it might have the desired effect of encouraging NPOs to take a closer look at how they operate and take action to run more effectively.
If this measurement has the latter effect, it can even help educate the public about what it actually takes to create change in the world. As we all know, change is rarely easy, straightforward, quick, or cheap. Effecting social impact takes people, time, planning, patience, collaboration, and the big one: money. But how well does our audience understand that, program by program? And how much do they believe, as we do, that there is plenty to go around, as long as we all play our part in contributing to that abundance?
Creating ‘synthetic’ data to help us benchmark expectations for impactful performance of nonprofits might sound far-fetched, like applying an ill-fitting for-profit scientific technique to a more complex, nuanced, squishy, human-centric field. But it might just be the bridge we need to make sure that the gravity of the needs we represent, and the immensity of the resources required to right the wrongs of the world, aren't lost on the people who give from their heads more than their hearts. Time will tell. In the meantime, I am interested in your reactions. How do you think the Impact Genome Project impacts you?