By Ishaan Guptasarma

Equal Application of Rigor for EA Interventions

Slate Star Codex had an interesting post about the equal application of rigor in philosophy. The same principle can be applied in a practical sense to effective altruist interventions as well.


Imagine that an established charity evaluator estimates the degree to which mosquito nets protect against malaria, the degree to which a distribution scheme leads to actual use of the nets among the population, and the prevalence and lethality of malaria in the region, then combines these numbers into an estimate of the amount of malaria prevented and the number of lives saved, and thereby arrives at a cost per life saved for mosquito net distribution.

Is this number accurate? You can't be certain. Even if all of the above estimates are accurate, the true amount of good done might be greater. Not having malaria could improve developmental outcomes for children by a quantifiable amount, grant parents greater capacity to earn income and care for their children, and shield people from the trauma of losing loved ones in a way that makes a measurable difference to life satisfaction. If your original analysis did not account for these effects, adding them in will change your number. The true amount of good done might also be less. You might discover that some fraction of the nets are diverted to less effective uses like fishing or clothing, or sold on a market where they do less good, decreasing the efficiency. You might discover that helping people earn more raises prices and hurts the neighboring villages, a small negative effect. Again, if your original analysis did not account for this, adding it in will change your number.

The freedom to choose which secondary effects to account for gives you a great deal of control over the end result of a cost-effectiveness analysis, regardless of how thorough and accurate the analysis is. The more positive secondary (side, flow-through) effects you add to your analysis, the more good it will seem like you're doing. The more negative effects you add, the less good it will seem like you're doing.
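To make the mechanics concrete, here is a minimal sketch of such an estimate. Every number and parameter name below is invented for illustration; none of these figures come from any real evaluation.

```python
def cost_per_life_saved(
    budget,               # total spent on net distribution ($)
    cost_per_net,         # cost to deliver one net ($)
    usage_rate,           # fraction of distributed nets actually used
    protection,           # fraction of malaria deaths a used net prevents
    deaths_per_net_year,  # baseline malaria deaths per covered person-year
    net_lifetime_years,   # years a net remains effective
):
    """Combine the estimates into a single cost-per-life-saved figure."""
    nets = budget / cost_per_net
    deaths_averted = (
        nets * usage_rate * protection * deaths_per_net_year * net_lifetime_years
    )
    return budget / deaths_averted

# Baseline analysis with made-up inputs.
base = cost_per_life_saved(1_000_000, 5.0, 0.7, 0.5, 0.004, 3)

# Now add one negative secondary effect: suppose 15% of nets are
# diverted to fishing or resale. The headline number worsens.
with_diversion = cost_per_life_saved(1_000_000, 5.0, 0.7 * 0.85, 0.5, 0.004, 3)

print(round(base), round(with_diversion))  # 1190 1401
```

The point of the sketch is the last two lines: the same underlying program, with one secondary effect toggled, yields a visibly different cost per life saved.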


You might think that when choosing an intervention you should account for as many positive and negative outcomes as possible, so as to get the most accurate number. Thoroughness of this sort is good practice for improving your knowledge and identifying crucial considerations that may dramatically influence your effectiveness. But when making the final comparison between interventions, you want to avoid a situation where the numbers you get are primarily an artifact of the methods you used to get them. When comparing interventions, therefore, try to make your analyses as parallel as possible. You may never be able to say literally how many lives are saved per dollar, but you might be able to say that one intervention is three times as good as another when the two are evaluated side by side using as similar a method as possible. This doesn't solve everything, of course: if you switch to a different method, even one applied uniformly across both interventions, 3x better might rise to 10x better, or drop to only 0.5x as impactful. Applying equal rigor will unfortunately not remove all forms of model uncertainty. However, it is a good tool to keep in your arsenal, and it will help prevent one form of inaccuracy.
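A toy illustration of this instability, with entirely invented numbers: two hypothetical interventions compared under two shared models, one counting only direct lives saved, one also counting flow-through effects.

```python
def impact(direct_lives, flow_through_lives, include_flow_through):
    """Total estimated lives saved under a given model choice."""
    return direct_lives + (flow_through_lives if include_flow_through else 0)

a_direct, a_flow = 300, 50    # intervention A: strong direct effect
b_direct, b_flow = 100, 400   # intervention B: strong indirect effects

# Model 1: count only direct lives saved (applied to both interventions).
ratio_direct_only = impact(a_direct, a_flow, False) / impact(b_direct, b_flow, False)

# Model 2: also count flow-through effects (again, applied to both).
ratio_with_flow = impact(a_direct, a_flow, True) / impact(b_direct, b_flow, True)

print(ratio_direct_only, ratio_with_flow)  # 3.0 0.7
```

Within either model the comparison is apples to apples, which is the point of equal rigor; but the verdict still flips between models, which is the residual model uncertainty the paragraph above describes.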


One area where it is especially important to remember to apply equal rigor is in evaluating your own work and your own ideas. When you are working on something directly, you can often see its positive impact with your own eyes and observe some extremely good flow-through effects that may not be captured by your cost-effectiveness analysis. The temptation is then to adjust your analysis to take the evidence of your eyes into account, arrive at a high cost-effectiveness figure, and trick yourself into thinking that your intervention is definitely the best use of resources on the margin. What's really happening, however, is that you see all the good your own project does in very high definition, while you see other projects only through summary statistics that don't capture everything. If you find yourself comparing your own projects and ideas against others and notice that yours seem unusually good, consider whether you are quantifying the good aspects of your own projects with greater rigor simply because you can see them better.

A person who gives money to impoverished individuals they personally know can easily list all the positive effects on each and every one of them: "I gave $2000 to John, so he was able to rent an apartment and get his family medical care, and one of his children had a life-threatening disease that was caught early thanks to that care, so my cost per life saved is at most $2000." That may even be an accurate statement. But it is not a fair comparison, because this individual can see everything that happens to John, whereas if, for example, a parent avoids malaria and therefore has more parenting capacity to get her kids the medical care that saves their lives, you wouldn't see that particular life saved reflected on a GiveWell spreadsheet.

Of course, interventions are often very diverse, and evaluating them using the same methods is hard. If they are somewhat similar, you can try to make them comparable (e.g. both vaccinations and mosquito nets protect against disease by some quantifiable amount, and share many similar considerations such as the base rate of the disease in the population, its lethality, etc.). Another approach is to judge all interventions according to their one or two biggest and most straightforward effects (e.g. lives saved) and ignore all smaller indirect effects. None of this is to suggest that indirect effects don't matter or should be ignored. You should try to quantify as many effects as seem useful to your decision. The important thing is to make sure that comparisons between interventions are made with model equivalency in mind.

