We have questions about fundraising and have tried to come to evidence-based, expert-based, or research-based conclusions. Some examples of questions we have tried to learn about are:
- What would be the best broad fundraising strategy to use?
- How long should you try an experiment on grant writing?
- Should you prioritize social media over creating blog content?
- How do you get more donors onto a newsletter?
The big problems we have come across with finding answers to these are:
All the Experts Disagree
We have talked to dozens of fundraisers and consultants and read dozens of books and websites on the matter, and the experts disagree on everything. We have found conflicting data and opinions on every strategy we looked at. Just take a look at this graph.
There’s Very Little Rigorous Science or Data Out There
Almost all of the science out there is observational. For example, the best material we could find about the ratios of money spent to money raised was based on surveys of organizations that participated in an expensive benchmarking program. This could have had a huge selection effect. What if only the large, successful organizations participated in this program? Maybe they get better ratios than charities just starting out.
That was relatively good observational data, though. Most of the time the data is like the graph above: just the opinions of fundraisers, saying things like, “I think this is effective.” Clearly not the most rigorous methodology.
External Validity is Elusive
But even if there were rigorous methodology, that’s no guarantee of external validity. We are dealing with a crowd that is very different from the average population in a lot of ways. Take, for example, the famous study showing that people gave more when shown a cute picture of a girl than when told statistics. That may well be true of the average population, but what about skeptics and intellectuals? There’s a good case to be made that they don’t want anecdotes; they want the hard data. That’s what we’re all about, after all.
There was another rather rigorously run study on the effects of giving more options or fewer. Lots of studies have found that analysis paralysis seizes people when they’re presented with too many options: it’s better to have one call to action than to offer several. But The Life You Can Save found that, when applied to donors, it’s better to give more options rather than fewer. So maybe the finding applies to choosing toothpaste, but doesn’t apply to charities? Or maybe it depends on the population? Or maybe it should be lots of options for small donations, but fewer for bigger ones, where choosing becomes stressful? External validity is hard to find in the social sciences, and fundraising doesn’t have enough people studying it to reach any sort of consensus.
The Numbers Can Be Misleading
When you find numbers, it can be very exciting until you find out that the numbers only work under certain circumstances. Take, for example, legacy fundraising (asking people to put your charity in their will). It has remarkable fundraising ratios of 20:1 or higher, whereas direct mail (sending letters to people about your charity) has much lower ratios, in the range of 1.2:1. However, you can’t just jump right into legacy fundraising. It would be quite presumptuous and untactful to ask somebody to put you in their will after only talking to them once. Legacy donors have to be long-term supporters who really love your cause and your organization, and that takes years of them knowing and trusting you. Basically, you have to use things like direct mail first to build the donor base of people who might consider putting you in their will.
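The pipeline dependency above can be sketched with a few made-up numbers. Everything here (the spend levels, donor counts, legacy conversion rate, and stewardship cost) is a hypothetical assumption, not real benchmarking data; the point is only that the attractive 20:1 ratio sits downstream of cheaper, lower-ratio channels.

```python
# Hypothetical illustration: why a 20:1 legacy ratio isn't directly actionable.
# All numbers are invented for the sake of the example.

def net_raised(spend, ratio):
    """Money raised minus money spent, given a raised:spent ratio."""
    return spend * ratio - spend

direct_mail_spend = 10_000
direct_mail = net_raised(direct_mail_spend, 1.2)   # 1.2:1 -> a small net gain

# Legacy gifts only come from long-term supporters, who were themselves
# acquired through channels like direct mail. Suppose only a small slice
# of the direct-mail donor base ever becomes a legacy prospect.
donors_acquired = 200          # donors gained via direct mail (assumed)
legacy_rate = 0.02             # share who eventually leave a bequest (assumed)
cost_per_legacy_ask = 500      # stewardship cost per prospect (assumed)
legacy_spend = donors_acquired * legacy_rate * cost_per_legacy_ask
legacy = net_raised(legacy_spend, 20)

print(f"Direct mail net: {direct_mail:.0f}")
print(f"Legacy net:      {legacy:.0f}")
```

The legacy channel looks dramatically better in isolation, but its spend only buys anything because the direct-mail base already exists; drop the first step and the second has nobody to ask.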
Experts Don’t Know Why They’re Doing What They’re Doing
When we’ve asked people why they’re doing fundraising strategy X instead of Y, they have given us a puzzled look. They’d then say they didn’t know, that it was hard to compare, or that it was what their boss had told them to do. They didn’t seem to really know why. Why is that? Why don’t people know? That leads me to the next problem:
Almost Nobody Keeps Proper Track of Fundraising Metrics
Part of this is because they’re not doing what they should be doing, and part of it is because it’s really, really hard, maybe even impossible. Fundraising is a lot like marketing, and marketing is very hard to measure. Say you have an ad on a bus about your charity. How do you measure its impact? You could add a dropdown menu during the donation phase asking how the donor heard about you, but what if they heard about you from two different sources? What if they don’t remember? What if they’ve been reading your blog diligently, and it’s the ad that finally got them to donate? Counterfactuals muddy the waters even further.
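The attribution problem above can be made concrete with a toy sketch. The channel names, donor journeys, and both attribution rules here are invented for illustration; real attribution is far messier. The point is just that the same donations can rank channels differently depending on which rule you happen to pick.

```python
# Hypothetical sketch of the attribution problem: the same donor journeys
# scored under two different attribution rules.
from collections import Counter

# Each list is the ordered sequence of touchpoints one donor reported
# before donating (all invented for this example).
journeys = [
    ["blog", "blog", "bus_ad"],
    ["newsletter", "bus_ad"],
    ["blog"],
    ["bus_ad"],
]

def last_touch(journeys):
    """Credit each donation entirely to the final touchpoint."""
    credit = Counter()
    for j in journeys:
        credit[j[-1]] += 1.0
    return credit

def equal_split(journeys):
    """Split each donation's credit evenly across its distinct channels."""
    credit = Counter()
    for j in journeys:
        for channel in set(j):
            credit[channel] += 1.0 / len(set(j))
    return credit

print(last_touch(journeys))   # the bus ad dominates
print(equal_split(journeys))  # the blog closes most of the gap
```

Under last-touch the bus ad gets three of the four donations; under equal-split the blog nearly catches up, because it did the earlier, slower work. Neither rule is “correct,” which is exactly why the metrics are so hard to trust.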
Over Half the Advice Out There is Empty, Unsubstantiated Buzzwords
A huge percentage of the fundraising gurus out there say a whole lot of nothing. When we’ve interviewed consultants, they say a lot of things, but when we ask, “So, what exactly would I have to do today to enact this plan?” they say something along the lines of, “Well, first you plan, then you do the plan,” or some such empty sentence. Their advice is vague and unactionable. If you then ask why you should follow their vague plans, they don’t have any well-thought-out reasoning.
Why is it This Way?
We don’t know for sure why this is the case, but we can venture some guesses. One reason might be that fundraising is a social endeavor, and psychology has only barely become a proper science, and even that is debated within the scientific community. (This is coming from people who have studied psychology, too!) Psychology, being a young field, is particularly full of confusion and contradictory findings. Additionally, humans are the most complex subject studied scientifically, far more complex than what chemistry or physics deal with, so progress might simply take much longer. It could also be that fundraising isn’t popular enough and doesn’t have enough money behind it to attract rigorous research. Or it could be a myriad of other reasons that I am not aware of.
The Light at the End of the Tunnel
It’s not all bad. We have found it comparatively easy to get relatively good information on some facets of fundraising, such as:
- Donor retention
- How to do a specific thing well (e.g. how to improve social media). Just not how to compare between methods.
- What not to do. For example, a small nonprofit should not do direct mail. (But everyone disagrees on what we should do.)
It would appear that we cannot simply analyze all the available data out there and then reliably pick the optimal path. We can do some minor research using the best information available, but ultimately we will just have to make best guesses and rapidly update depending on how things are going. We have tried the other way, and it led to a lot of analysis with very little or no progress. There’s simply too little rigorous information out there to make a good decision that way. It’s better to try things after a shallow amount of research and do our best to see how well they go.
Of course, this has its own problems. For example, how long should we run an experiment? We don’t want to run the risk of staying on a project too long when it’s wasting our time. On the other hand, we don’t want to give up too soon on a project that would have worked if we had stuck with it. If Michael Jordan had given up after being cut from his high school varsity team, that would have been a tragedy. (Well, a first-world tragedy.) Additionally, as with most skills, you aren’t amazing right away. There’s a learning curve, and we don’t want to limit ourselves to only the things we are good at immediately.
We don’t have an answer to this question yet, but we are working on it. This still seems like the best approach given the information-sparse environment we are in.