The first post in this series looked at another illustration of the complexity of doing business or ministry in China. This blog has looked several times at how complicated life is. This post ponders complexity in terms of how to evaluate and compare charities that have vastly different operating circumstances.
Complexity of evaluating charities
In an earlier post I had a conversation with William Barrett. I talked about the variety of charities in the U.S. and how difficult it is to develop a quantitative ranking system that accounts for the great variation in NPO operations. Consider the following types of organizations in the nonprofit community and how they generally fare in the rating systems. I have revised the list from the earlier discussion; it is only a partial one:
- There are the outliers who aggressively use GIKs, use expensive telemarketers to raise most of their funds, aggressively use joint cost allocation to reduce supporting services costs, and spend an embarrassingly small portion of funds on beneficiaries. They likely get superb grades from the rating agencies. A number of reporters have been giving well-deserved attention to many of these organizations.
- There are NPOs that handle large volumes of medical supplies or other GIKs without using aggressive pricing, debatable variance authority, or joint cost allocations that cause head-scratching among readers of the financials. They have very low supporting service costs and as a result get superb ratings.
- There are charities that are building their infrastructure so they can better comply with all the rules or they are spending a lot on fundraising trying to grow so they can provide more services in the future. They are penalized by the rating systems when compared to other charities that aren’t in growth mode.
- There are charities in the starvation cycle that are not spending what they need to build an infrastructure that provides sufficient support. They do better with the rating agencies than the average comparable charity.
- The average, average NPO in an average sector gets good ratings but is penalized in relation to peers who are in the starvation cycle.
- There are charities that retain CPAs who understand nonprofit accounting and have some expertise in the specialized issues. Then there are charities that find CPAs who, um, are not, um, quite as informed.
- There are charities that are, um, aggressive in their accounting.
- There are charities in unpopular sectors, or that are controversial, have poorly recognized brands in the shadow of popular ones, or have historically had a challenge raising funds. Their fundraising ratios will be higher, so they get penalized in the ratio calculations.
There are still more types of charities we could consider.
With charities in all those different places, how do you come up with a set of calculations that can reasonably compare them? Here is a list, in what I think would be the order of best to worst score, based on the ratios I perceive are in use today. What ratios would you suggest to evaluate these organizations and reasonably rank them?
- an aggressive GIK-based NPO, several of which we have read about extensively in lots of newspaper articles
- a GIK-based ministry that is non-aggressive in pricing, variance policy, and joint cost allocation
- a typical ministry in a popular sector that is in the starvation cycle
- a typical charity in a popular sector spending appropriately on infrastructure
- a charity that sees tremendous need and is aggressively expanding, which means they are incurring unusually high administrative and fundraising costs for several years in a row
- a reasonably ‘efficient’ charity in an unpopular sector
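To see why the rankings come out that way, here is a minimal sketch of the program expense ratio, one of the ratios I understand the rating systems lean on. All the dollar figures are entirely hypothetical, chosen only to illustrate the mechanics: a large GIK amount booked as program expense inflates the ratio even when the cash operations of the two charities are identical.

```python
# Minimal sketch of the program expense ratio (program expenses as a
# share of total expenses). All figures below are hypothetical.

def program_expense_ratio(program, management, fundraising):
    """Fraction of total expenses reported as program services."""
    total = program + management + fundraising
    return program / total

# Hypothetical GIK-heavy NPO: $90M of donated goods booked as program
# expense dwarfs $2M of management and $3M of fundraising costs.
gik_heavy = program_expense_ratio(90_000_000, 2_000_000, 3_000_000)

# Hypothetical cash-based charity with the same cash operations but
# no GIK inflating the numerator.
cash_based = program_expense_ratio(6_000_000, 2_000_000, 3_000_000)

print(f"GIK-heavy:  {gik_heavy:.0%}")   # ~95%
print(f"Cash-based: {cash_based:.0%}")  # ~55%
```

Same staff, same fundraising spend, radically different scores. That is the arithmetic behind category one landing at the top of the list.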
By the way, I don’t know what ratios would work. I don’t know how to compare those six categories.
It’s complicated.
What do you think? Comments welcome.