A convincing empirical study in animal rights needs the following:
At Charity Science we’re super excited to hear that over the next 3 years $1 million will be committed to empirical studies within animal rights. Combined with Open Philanthropy’s announcement that they’ve hired a Factory Farming Program Officer, this means that empirical studies of animal rights issues are going to receive more money than ever before. This could fill a real gap in the animal rights movement and take some of its interventions from “my gut feeling is that this works” to “I would bet my house that this works.” But before we proclaim that the end of speciesism is nigh, it’s worth acknowledging that past empirical studies within animal rights have been far from perfect. That’s why in this post we are outlining a checklist of things to do when running a study in animal rights.
First let’s look at a 2012 study by the Humane League which focused on the effect leafleting had on diet choice. The Humane League is one of ACE’s Top Rated Charities and Humane League Labs has led some pioneering research, but this study has some significant methodological flaws. To truly tell the effect of an intervention, it has to be compared to a control group - and not just any control group either - it has to be a randomized control group. Randomization means that statistically significant differences between the control and experimental groups can be attributed to the intervention itself. In this study, participants were instead compared to the “average American meat-eater”. But what if the college students involved ate less meat to begin with and were in fact far from average? That’s why the first item on our list is a randomized control group.
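Random assignment itself is simple: shuffle the participant list and split it into arms. Here is a minimal sketch in Python (the function name and participant IDs are our own illustration, not from any particular study):

```python
import random

def randomize(participants, seed=0):
    """Randomly split a participant list into treatment and control arms."""
    rng = random.Random(seed)      # a fixed seed makes the assignment reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]  # (treatment, control)

# hypothetical participant IDs
treatment, control = randomize([f"student_{i}" for i in range(200)])
print(len(treatment), len(control))
```

Because assignment is random, any systematic difference that later shows up between the two arms can be attributed to the intervention rather than to who happened to be in each group.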
When analyzing the differences between the randomized control and experimental groups, pre-committing to what you’re analyzing and how you’ll analyze it is a must. Otherwise, humans have an incredible ability to ignore what they actually see in favor of (a) what they expect to see and (b) what they want to see.
For more on what should be included in a pre-analysis plan, please see this. Without this pre-commitment, studies are more likely to find a false positive.
By torturing data you can get it to say anything you want. If you don’t believe me, check out this paper for a more concrete example. A pre-analysis plan makes the widespread practice of p-hacking for statistically significant results much more difficult, which may help to avoid issues like the possible problems with ACE’s Leafleting outreach study in 2013. I hope you can see why item 2 on the checklist is a detailed, specific and public pre-committed analysis plan.
To further guard against questionable statistical analysis of the data, the raw data of the study should be made public. That way many eyes can look the data up and down. I really feel like there is no reason not to do this. We’re after what actually works, aren’t we? So the third item is be transparent and let the raw data loose upon the masses.
The fourth item worth mentioning is to focus on metrics that matter. If we’re trying to find out how to help animals, the most important metric is animal product consumption, followed by perceptions of animals and perceptions of reducing animal product consumption. These intermediate metrics can be quite helpful because the effect on them is likely to be larger, which makes it easier to detect. Lastly, we want similar outcome metrics that can be used across studies, so that comparisons between different interventions are easy to make.
To detect very small differences in these metrics between groups you will need a decent sample size. That might mean thousands or even tens of thousands of people in the study so that minuscule changes in the most important variables can be detected. For instance, with the classic significance level of 0.05 and a desired power level of 0.8, and estimating that the proportion of vegetarians at follow-up is 3% in the control group and 4% in the experimental group, the sample size will have to be around 11,000 people in total. And that’s assuming that you manage to follow up with every single participant! So number 5 on our list is: have a sample size big enough to give the statistical power to detect a very small effect on these metrics. Before going on, I should note that researchers should be careful of spillover effects, which could contaminate the control group and make it much harder to detect the effect of the intervention. For instance, people who convert to veganism in the experimental group may easily influence people they know in the control group into changing their diet. This can be prevented by having several schools participate instead of several classes within one school. That way the control and experimental groups will have almost no interaction.
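That sample-size figure can be reproduced with the standard two-proportion z-test formula. A sketch using only the Python standard library (the function name is ours; the exact total depends on the formula and rounding used, but it lands near the figure quoted above):

```python
from math import sqrt, ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per arm to detect proportions p1 vs p2
    with a two-sided two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for power = 0.80
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

n = n_per_group(0.03, 0.04)
print(n, 2 * n)   # about 5,300 per arm, roughly 11,000 in total
```

Note how sensitive the total is to the assumed effect: detecting 3% vs. 5% instead would cut the required sample size to roughly a quarter.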
Other things to be very aware of are two strong biases that constantly come up in these studies: a bias in who participates in the study, and a bias in the answers of those who do participate.
The first bias is a selection bias and it comes in the selection of those who participate and the selection of those participants who the researchers follow up with. Basically if people know that a survey is about topic ‘x’, those who are unfamiliar with or uncomfortable with topic ‘x’ may not want to be included in the survey. Or if people know that someone is going to offer them a leaflet on something that they don’t care about, then they’re less likely to take that leaflet.
The selection bias rears its ugly head in the follow up as well. If only 20% of participants are followed up with, they could be very unrepresentative of the entire sample. Often the low proportion of participants who are followed up with in these types of studies makes them particularly problematic. A great study would aim to have high response rates, ideally in the 80% range.
The second bias is social desirability bias. This occurs when respondents clue in that a particular response is in some way preferred, or more socially desirable. Surprise, surprise - after figuring this out they are more likely to give that response. A clear example would be if the surveyor was known to hand out flyers or was wearing a vegan t-shirt while giving out the survey.
Thus point 6 is: beware selection and social desirability bias.
Just to recap, the keys to doing a good study in animal rights are:
1. A randomized control group
2. A detailed, specific and public pre-committed analysis plan
3. Transparency: make the raw data public
4. Metrics that matter, comparable across studies
5. A sample size big enough to detect very small effects
6. Awareness of selection and social desirability bias
If these are followed then we’ll see the true beauty of empirical studies, and their results could take us from relying on personal experience, rules of thumb and intuition to having informed, updated opinions based on the relevant data. RCTs are the most powerful approach the animal rights movement can use to determine the means by which it can achieve its ends. For that reason, the movement can only benefit by informing its intervention selection with properly done RCTs. I can only drool at what the possible results could be.
Below is the monthly report for August in which we will re-highlight some key findings discussed in board meetings from the past 6 months.
Switch from board meetings to reports
The first noticeable change is the switch back to monthly reports: monthly board meetings took a lot more staff time than having one staff member write the report. It is also advantageous for interested individuals following our work to be able to see our monthly reports online.
Money moved update
Below are some basic estimates for money moved on a monthly basis. This does not take into account counterfactuals, nor does it include money which we moved that did not go directly through our bank account.
The reason April/May was higher than other months was our Living on Less peer-to-peer campaign. We expect to raise considerably more over the latter half of the year due to more individuals donating closer to the end of the tax year and more of our time going towards direct fundraising vs. fundraising research.
Growing our team
An area we have had a lot of success in recently has been expanding the Charity Science team. So far, we have trial-hired five individuals and taken on one intern. We have learned a lot about the hiring process (expect an upcoming post on this topic). The additional staff have allowed Charity Science to progress much faster and to try out more areas of fundraising than would otherwise have been possible. Our newest full-time hire, Kieran Greig, has consistently surprised us with his skill level and ability to learn efficiently and apply higher-level concepts. It is also worth noting that we have had some great volunteers periodically throughout the year, which helped us to complete work at a faster pace. We also have two other new employees, one of whom has already started and another who will be starting soon.
Growing our funding
We have applied to EA Ventures and were recommended as a strong charity to fund. We hope their funding will allow us to continue to grow our team (from 3 to 6 staff) and budget (from $50k to $100k).
We are now accepting donations and pledges for our 2016 funding needs. Our needs will be in the ballpark of $100,000. We will need new donors to help fill this funding gap.
Effective Altruism Global (EAG)
The Charity Science team gave a presentation at the recent EAG event and met many individuals who were able to offer advice, funding or feedback on our activities. One particularly insightful individual was Jeffrey Brown, from GIF and DIV, who helped us expand on the broader picture of effective charities.
We were also able to meet and talk with many members of the GiveWell team. The conversations we had, some of which were with the founders, have changed the ideas we have on how to run Charity Science (e.g. how quickly to expand) and the value we place on different aspects (e.g. value of an external review). Both concepts are elaborated further below.
Regarding the EAG conference, we were somewhat concerned that the focus felt tilted towards certain causes over others. This Vox article does a good job explaining some of our concerns.
Changing our board structure
We have decided upon a restructuring of our legal/advisory board. Ideally, we want Charity Science to be able to have a larger advisory board without the legal responsibility or time commitment of being part of the “legal board”. In the future, we plan on moving to a smaller legal board and a larger advisory board. We feel this will allow the best of both worlds in terms of being able to connect into a larger pool of advice without being slowed down by legal board decisions.
Valuing staff time
Another change we are working on is valuing staff time more. After some conversations at EAG and with GiveWell staff, I came to the realization that there are few people deliberately working to start effective charities as a way to give the most efficient aid. Many individuals I talked to felt that Charity Science should be more ambitious and focused on scaling. They also voiced concerns that we spent too much time trying to save money and keep our ratios high, instead of also prioritizing absolute money moved. This realization has played into several spending decisions (e.g. the value and cost of doing an external review, or the cost of spending staff time on hiring vs. outsourcing).
External vs internal review
Unfortunately, the two most promising individuals we contacted for an external review both no longer have the time to conduct it. GiveWell staff suggested to us that an internal review might be quicker and more in-depth. They pointed out that it’s very hard to find somebody who will volunteer their time (and thus not be biased by compensation) and who is also able to put in a sufficient amount of time. We are leaning towards deprioritizing this project until after Christmas.
Update on fundraising experiments
We made reports and used cluster thinking to pick promising areas.
Recently, we did shallow reviews on 20+ areas of fundraising and would like to elaborate on why we picked the three we did to experiment with. We used our staff and board members’ views as well as some soft judgement calls to finalize our fundraising choices.
Legacy giving has the highest average fundraising ratio of any measured fundraising technique - 22:1. It is currently not easy for individuals to support GiveWell charities in their wills, but we could set up a system that would make it fairly quick and easy to do so. We have already done research in the area for our will-writing guide and have had some interest from other groups regarding coordination on this project.
Raising for Effective Giving (REG) has previously had major success in this area. Affiliate groups are universally recommended by fundraising experts. The Effective Altruism (EA) network could give a new niche organization a large initial member base. Many individuals expressed interest in niche-specific and/or general workplace information content that would help them better explain effective charity.
Currently, we have a Google grant that gives us free online ads to test. If managed successfully, the grant will increase from $10,000 to $40,000 a month. Similar organizations (e.g. GiveWell) have gotten promising enough returns to pursue further grants. We wanted to experiment with a donor acquisition strategy that targets individuals we have not met before, and online ads could sync well with gaining more attention for our new niche websites. If this method works well it could be an incredibly scalable strategy.
Overall, we think these areas are very promising, we expect to have some success, and most of all, we will definitely learn a lot.
This report has noted the key updates which occurred in the last few months. We will be writing to expand on some of the specific conclusions in the future.
At Charity Science we recently updated the look and content of our website. If you’re interested, you can see the new site here. The main content changes were the addition of a page with links to 23 shallow reviews into different fundraising methods and an overview of Charity Science’s past work, key values and plans for the coming months.
These shallow reviews are about 100 pages in total, and are intended to be understandable to someone with no previous knowledge in the area. Reports vary somewhat in quality and style because they were written by many different staff and volunteers. However, all of them were based on the same questions and evaluation rubric.
As for our plans, over the next few months Charity Science will experiment with legacy fundraising, niche marketing and online advertising. We appreciate feedback on the experiment plans for these areas, which are available here. We also strongly encourage anyone wanting to help out with these experiments to contact us. Programmers, skeptics and financiers would be especially valuable for fine-tuning the niche marketing experiment plan. If you feel that you can help please email email@example.com.
Regarding the legacy fundraising experiment, a core part of this will be to create a will-writing guide that encourages people to leave money to top charities. Thus far we feel that the best forms for this are a downloadable PDF document or an interactive web application. Examples of these two styles can be found here and here.
We would appreciate hearing what type of will writing guide you think would be better. We will likely create both but your opinion will help us decide how to allocate our time between them. We’ll be sure to seek more community feedback as we create this guide. Ideas about the best ways we can promote it and how likely EAs would be to use it would also be helpful in informing our approach to the legacy fundraising experiment.
Lastly, Charity Science is in the process of expanding and is looking to take on a number of interns over the next few months. All interns will receive mentoring from senior staff and the opportunity to join the team in Vancouver. The work involved will likely be quite diverse in nature and will vary considerably depending on the individual’s strengths. Some areas where work will be available include, but are not limited to, research, operations, development and communications. If you’re interested, please send your questions or resume and cover letter to firstname.lastname@example.org. We ask that applicants be willing to commit at least 70 hours of their time to the internship.
Hi, I am Kieran. I joined the Charity Science team in June 2015 after volunteering since February. Hope you're enjoying the new website.
Charity Science recently evaluated more than 20 different fundraising methods to determine which would be the most promising to experiment with in the third quarter of 2015. This prioritization task resulted in slightly more than a hundred pages of research about fundraising methods, which should be helpful for other groups considering a wide range of fundraising options. When researching, reviewing and evaluating these methods, something that persistently proved useful was the cluster thinking approach. In essence, cluster thinking involves approaching empirical questions from numerous reference frames or mental models and synthesizing this cluster of views into one’s opinion.
We attempted to apply this approach when researching fundraising methods by asking researchers to look for and explain reasoning from multiple angles. These angles included but were not limited to:
In addition to having researchers consider a number of different outlooks when completing their research, the researchers themselves also represented a set of slightly different perspectives, stemming from their differing levels of involvement with Charity Science, experience in fundraising and familiarity with research. However, that didn’t mean all points proffered were weighted equally, with the conclusion just an average of them. This wasn’t an exercise in pure “philosophical majoritarianism.” Nor was any view completely ignored in the decision-making process.
One of the reasons we encouraged research informed by the cluster thinking approach is that, as we have previously written, there is no good science on fundraising. Further, there appear to be systemic issues with the available information: it is usually informal and anecdotal in nature, seemingly pervaded by publication bias, and rarely acknowledges, let alone attempts to answer, questions of cross-applicability… but those are issues for another time. What matters here is that robust expected value estimates couldn’t be made.
Instead, the expected value estimates we constructed were often heavily contaminated with uncertainty. For instance, 95% confidence intervals could span orders of magnitude without even accounting for Knightian uncertainty. This didn’t prevent us from completing expected value estimates during the research; rather, it caused some researchers and all reviewers to adjust the epistemic weight assigned to them. Within the cluster thinking approach this meant that expected value estimates were just one line of reasoning integrated into our overall conclusions about a fundraising method. For a related piece that goes into more depth about Bayesian updating in light of uncertainty in expected value calculations, see why expected value estimates can’t be taken literally even when they are unbiased.
An advantage of treating expected value estimates this way is that it prevented a single poorly constructed or overly powerful link in the chain of reasoning from dominating our conclusions. For instance, even though some fundraising methods offered enticingly high potential returns, as in the acquisition and stewardship of High Net Worth Individuals, this wasn’t enough to swamp all other considerations. Similarly, a remarkably poor counterfactual estimate within one expected value estimate would not derail a conclusion that draws on many weak arguments instead of one relatively strong one.
After incorporating elements of cluster thinking at the individual research level and at the individual review stage, we also sought to apply it in the final evaluation stage. To do this, all Charity Science staff and some board members were invited to individually evaluate each fundraising technique. We asked that individuals not express their preferences beforehand, as this would likely influence the ratings of others. After all involved had reached their independent conclusions, we compared the differing evaluations and explored areas of disagreement. Throughout this process we were aware of the distinction between fox and hedgehog style thinking and attempted to emulate the fox type, because evidence suggests its predictions are more accurate.
From start to finish this process took about two months, though it wasn’t worked on continuously. Our conclusion was that the most promising fundraising methods were legacy fundraising, niche marketing and online advertising. In future posts it’s likely we will write more about each method.
One of the biggest projects that is happening within Charity Science is we are trying to figure out which new areas to experiment in. We are looking for areas that a) are most likely to raise the most money possible for effective charities, and b) provide the most learning value.
When we first founded Charity Science, we spent some time researching different areas and considering what to experiment with. Since then we have learned a lot, and we need to refresh and expand on our research. To do this we decided to do shallow reviews on different fundraising topics, asking key questions so we can more accurately compare the different options. Although good research is scarce or nonexistent in most fundraising areas, we still thought we could benefit from spending 10 to 20 hours considering each of a wide range (around 30) of areas.
A variety of volunteers and charity staff members contributed to the reports, which range from very short (less than 1 page) to very long (around 14 pages), depending on the area’s complexity and how promising it seemed. We saved time by stopping research early if an area looked very unpromising.
We will publish our methodology, our full reports and our relative rankings on this blog so others can benefit from the research we have done. We do not expect these reports to cover 100% of fundraising areas or be perfect, but we do expect them to help individuals and organizations when considering and comparing a wide range of fundraising options. The reports were written with Charity Science and GiveWell-recommended charities in mind, so they might be less applicable to other cause areas and to much larger charities.
The questions and broad structure we attempted to use for each topic area are listed below. We also made an evaluation rubric for staff and volunteers, which we found greatly increased report quality.
There is no good science on fundraising, and what to do about it
We have questions about fundraising and have tried to come to evidence-based, expert-based or research-based conclusions. Some examples of questions we have tried to learn about are:
The big problems we have come across with finding answers to these are:
All the Experts Disagree
We have talked to dozens of fundraisers and consultants and read dozens of books and websites on the matter, and experts disagreed on everything. We have found conflicting data and opinions on every strategy we looked at. Just take a look at this graph.
This is a survey of over a thousand fundraisers, and they disagree on virtually everything; opinion hovers around 50-50 on practically every strategy. Even for the most agreed-upon strategy, events, 1 in 5 fundraisers think it isn’t effective. So if experts can’t be followed, maybe we could follow the science?
There’s Very Little Rigorous Science or Data Out There
Almost all of the science out there is observational. For example, the best data we could find about the ratio of money spent to money raised was based on surveying organizations that participated in an expensive benchmarking program. This could have a huge selection effect. What if only large, successful organizations participated in the program? Maybe they get better ratios than charities just starting out.
And that was good observational data, relatively speaking. Most of the time it’s data like the graph above, based only on the opinions of fundraisers saying things like, “I think this is effective”. Clearly not the most rigorous methodology.
External Validity is Elusive
But even if there were rigorous methodology, that’s no guarantee of external validity. We are dealing with a crowd that is very different from the average population in a lot of ways. Take, for example, the famous study showing that people gave more if they saw a cute picture of a girl rather than being told statistics. This may well hold for the average population, but what about skeptics and intellectuals? There’s a good case to be made that they don’t want anecdotes; they want the hard data. That’s what we’re all about, after all.
There was another rather rigorously run study on the effect of offering donors more options or fewer. Lots of studies have found that analysis paralysis seizes people presented with too many options; it’s better to have one call to action than to give them several choices. But The Life You Can Save found that, for donors, it’s better to give more options rather than fewer. So maybe the finding applies to choosing toothpaste but not to charities? Or maybe it depends on the population? Or maybe there should be lots of options for small donations but fewer for bigger ones, where the choice becomes stressful? External validity is hard to find in the social sciences, and fundraising doesn’t have enough people studying it to reach any sort of consensus.
The Numbers Can Be Misleading
When you find numbers, it can be very exciting until you discover that they only hold under certain circumstances. Take, for example, legacy fundraising (asking people to put your charity in their will). It has remarkable fundraising ratios of 20:1 or higher, whereas direct mail (sending letters to people about your charity) has much lower ratios, in the range of 1.2:1. However, you can’t just jump right into legacy fundraising. It would be quite presumptuous and tactless to ask somebody to put you in their will after only talking to them once. They have to be long-term supporters who really love your cause and your organization, and that takes years of them knowing and trusting you. Basically, you have to use things like direct mail first to build the donor base of people who might consider putting you in their will.
Experts Don’t Know Why They’re Doing What They’re Doing
When we’ve asked people why they’re doing fundraising strategy X instead of Y, they have given us a puzzled look. They’d then say they didn’t know, that it was hard to compare, or that it’s what their boss told them to do. They didn’t seem to really know why. Why is that? Why don’t people know? That leads me to -
Almost Nobody Keeps Proper Track of Fundraising Metrics
Part of this is because they’re not doing what they should be doing, and part of it is because it’s really, really hard. Maybe even impossible. Fundraising is a lot like marketing, and marketing is very hard to measure. Say you have an ad on a bus about your charity. How do you measure its impact? You could add a dropdown menu during the donation process asking how the donor heard about you, but what if they heard about it from two different sources? What if they don’t remember? What if they’d been reading your blog diligently, and it was the ad that finally got them to donate? Counterfactuals muddy the waters even further.
Over Half the Advice Out There is Empty Unsubstantiated Buzzwords
A huge percentage of the fundraising gurus out there say a whole lot of nothing. When we’ve interviewed consultants, they say a lot of stuff, but when we ask, “So, what exactly would I have to do today to enact this plan?” they say something along the lines of, “Well, first you plan, then you do the plan,” or some similarly empty sentence. Their advice is vague and unactionable. If you then ask why you should follow their vague plans, they don’t have any well-thought-out reasoning.
Why is it This Way?
We don’t know for sure why this is the case, but we can venture some guesses. One reason might be that fundraising is a social endeavor, and psychology has only just become a proper science, and even that is debatable in the scientific community. (This is coming from people who have studied psychology too!) Psychology, being a young field, is particularly full of confusion and contradictory knowledge. Additionally, humans are the most complex thing studied scientifically, far more so than the subjects of chemistry or physics, so it might take a lot longer to make progress. It could also be that fundraising research isn’t popular enough and doesn’t have enough money behind it, or any of a myriad of other reasons that I am not aware of.
The Light at the End of the Tunnel
It’s not all bad. We have found it comparatively easy to get relatively good information on some facets of fundraising, such as:
It would appear that we cannot simply analyze all the available data and reliably pick the optimal path. We can do some minor research using the best information available, but ultimately we will just have to make best guesses and rapidly update depending on how things are going. We have tried the other way, and it led to a lot of analysis with very little progress. There’s just too little rigorous information out there to make a good decision that way. It’s better to simply try things after a shallow amount of research and do our best to see how well they go.
Of course, this has its own problems. For example, how long should we run an experiment? We don’t want to run the risk of staying on a project too long and wasting our time. On the other hand, we don’t want to give up too soon on a project that would have worked if we had stuck with it. If Michael Jordan had given up after being cut from his high school varsity team, that would have been a tragedy. (Well, a first world tragedy.) Additionally, as with most skills, you aren’t amazing right away. There’s a learning curve, and we don’t want to limit ourselves to only the things we are good at immediately.
We don’t have an answer to this question yet, but we are working on it. This still seems to be the best solution given the information-sparse environment we are in.
We have now run four peer-to-peer (P2P) fundraising campaigns: the Charity Science walk, birthday fundraisers, Christmas fundraisers, and Experience Poverty. Here are some of the broad lessons we’ve learned so far.
The power of publicity
We previously wrote that publicity was one of the factors we expected to matter most in determining success. In our most recent experiment, however, we had much more publicity and many more people joining up, but raised less money overall than some much less publicised events.
Two theories on why this might have happened are:
A) More publicity leads to more fundraisers, but those most dedicated would have participated regardless. This would lead to major diminishing returns on outreach.
B) A huge amount of the money raised for each P2P event was raised by the top three fundraisers. They were not evenly spread out between campaigns, so would account for the majority of the variance.
Bright Spots are really important
As mentioned in the point above, our top fundraisers (or bright spots) made a huge difference to how successful a campaign was. The average amount raised per fundraiser ranged from $160 to $240 across events, with the overall average being $200. Out of 300 participants in total, about 40% of all money raised came from the top 3 fundraisers. These were often well-off, older-than-average individuals raising from work peers and friends. This makes us lean towards a workplace P2P event and workplace giving in general. We have talked to these bright spots a little and plan to talk to them more so that we can determine how to reach out to similar individuals.
P2P fundraisers work great if they are not done too often
We wrote previously on this point, but after analyzing our data we are now even more confident in it. Our average fundraiser raised 60% less if they had done another P2P campaign in the past.
We are less confident in matching than we have previously written
Previously we wrote that we thought matching was a major factor in our P2P success. However, after running a few more campaigns and comparing the matching and non-matching ones, we have found no difference between the two. The end-of-campaign spike that we had previously attributed to matching also occurred in campaigns where we did not match anything. Some external research also suggests matching is not as strong as conventional fundraising wisdom would have it.
We could not find easy ways to increase the amount individual fundraisers raised
Matching was not the only thing we looked at when comparing different campaigns and different fundraisers. Some individuals’ campaigns were randomly chosen to be seeded (given a small amount of initial funding), as conventional fundraising wisdom suggests this works well in inspiring more donations. However, in our small test we did not find any effect. We also checked whether a more extreme or harder campaign raised more money per individual, but again saw no visible effect. The main factors that did seem to matter were how many people the fundraiser contacted and how wealthy the fundraiser’s network was. The number of staff hours that went into an event did not visibly affect total money moved (a slight negative correlation), although it did correlate well (0.81) with the number of people who signed up, and with money moved once the top three fundraisers were excluded (0.81). Offering more help to individuals seemed useful in some situations, but we could not measure by how much. Setting up two levels of challenge did not seem to affect the average amount raised.
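For readers curious what the 0.81 figures above refer to: they are Pearson correlation coefficients between per-campaign variables. The sketch below shows the kind of calculation involved, using made-up numbers (the hours, sign-up counts, and dollar totals are hypothetical illustrations, not our actual campaign data).

```python
# Illustrative sketch with HYPOTHETICAL campaign numbers (not real data):
# staff hours can correlate strongly with sign-ups while barely tracking
# total money moved, as described in the post.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

staff_hours = [40, 80, 120, 160]            # hypothetical hours per campaign
signups     = [10, 25, 33, 45]              # hypothetical participant counts
money_moved = [30000, 20000, 45000, 25000]  # hypothetical totals ($)

print(round(pearson_r(staff_hours, signups), 2))      # strongly positive
print(round(pearson_r(staff_hours, money_moved), 2))  # weak
```

A correlation near 1 (as with sign-ups here) means the two series move together almost perfectly, while a value near 0 (as with raw money moved) means staff effort explains little of the variation.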
The total cost of all P2P events, including staff pay (but not counterfactual time), was about $6,000. About $125,000 was raised, not including any matching or seeding, and excluding some but not all counterfactual donations. This works out to about $350 raised per staff hour.
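A quick back-of-the-envelope check of those figures (the dollar amounts come from this post; the implied staff-hour count is our own derivation, not a number we reported directly):

```python
# Sanity-checking the P2P totals above. Dollar figures are from the post;
# the implied staff-hour count is derived, not directly reported.
total_raised = 125_000   # $ raised across all P2P events
total_cost = 6_000       # $ total cost, including staff pay
raised_per_hour = 350    # $ raised per staff hour, as reported

implied_hours = total_raised / raised_per_hour  # ~357 staff hours
roi = total_raised / total_cost                 # ~21 dollars raised per dollar spent

print(f"implied staff hours: {implied_hours:.0f}")
print(f"dollars raised per dollar spent: {roi:.1f}")
```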
Things to do better
Focus more on the top fundraisers
Fundraising is very top-heavy, and our top fundraisers brought in most of the money moved. In the future we plan to put a much larger focus on these individuals.
Pick a better name
We had very large pushback on the name “Experience Poverty” and will spend more time picking a name in the future.
Focus on one great P2P fundraiser a year, with an open door for people doing year-round fundraisers
Due to P2P fatigue and diminishing returns, we are currently leaning towards putting our energy into one very successful P2P event a year, around Christmas. Christmas is the clear choice for a number of reasons: it was our most successful event, and conventional fundraising wisdom holds that many people make their donations at the end of the year. We plan to leave open ways for people to run year-round fundraisers, such as birthdays or weddings, but we will not promote these as heavily as we did Experience Poverty or birthdays in the past. We may also set up specific personal campaigns for individuals with particularly strong networks.
Getting a better P2P system?
We found a few flaws with our system and will consider paying for a different one in the future. It might be worth paying more money upfront for a system that handles multiple currencies better and has lower percentage-based fees. A few individuals outside Charity Science are also looking into building free systems for effective charities.