By the CE Team

Poverty Research Organization

This is one of our charity profiles, in which we present our shallow, preliminary research on a potentially promising charity idea. We believe this idea could be a contender for a GiveWell top charity if further research confirmed it and if someone started the charity, executed it well, and resolved some of our outstanding questions and reservations.

Basic idea

Conduct high-quality research establishing which global poverty interventions are the most promising.

Summary

Cost-effectiveness: Low-High -- While it may be possible to estimate the historical cost-effectiveness of global poverty research, we are not aware of any such estimates, and we would be concerned about whether they would generalize to our specific case. It's possible for research to be highly cost-effective, but there are far too many unknowns.

Strength of Evidence: Low-Medium -- The case for the impact of global poverty research is intuitively strong. However, the causal chain is long enough that it ought to be established by empirical evidence before we consider its strength of evidence high.

Counterfactual Scalability: Low-High -- A considerable number of competent organizations already conduct global poverty research, and it may be better to support them than to start another organization in this space. However, we feel there is possibly a large number of valuable high-quality studies still to be done. Scaling a research organization could be difficult, and there is some risk that no one acts upon the research we complete. That said, scale matters less for this kind of organization, as it could be possible to have a very large impact with just a few studies.

Ease of Testing: Low -- It would be very difficult to measure the impact of research, let alone quantify its impact on endline metrics.

Flexibility: Medium-High -- A research organization can change its research focus more easily than a direct charity can change its programs. However, a research program would still be limited to doing research.

Logistical Possibility: Medium -- We feel that producing high-quality research may be more challenging than implementing a specific charity idea. We may also lack the formal academic credentials required to run a research organization. Additionally, there may be a difficult lobbying component to ensuring that others take our research into account.
Why We Think This Could be an Effective Opportunity

While some scholars have attempted to empirically quantify the impact of developed-world medical research (e.g., HERG, OHE, & RAND, 2008), we are not aware of anyone who has quantified the impact of global poverty research. However, we see the case for such research as intuitively promising. GiveWell writes in their list of charities they'd like to see that they would be excited to see "[c]harities that collect or generate information and data relevant to [their] recommendations." For example, GiveWell sees value in more information on large-scale bednet distributions implemented by groups other than the Against Malaria Foundation (AMF), on salt iodization programs, and on tetanus immunization programs, but seems not to know of organizations collecting this data. They also write about a desire for more randomized controlled trials (RCTs) pertaining to their priority programs, which could potentially be more cost-effective than their current top charities. Similarly, our initial intervention research identified some potentially promising ideas that we ultimately eliminated because they had a poor evidence base. This problem could be solved with more high-quality global poverty research.

The principal benefit of performing global poverty research is likely the chance of finding a charity more cost-effective than GiveWell's current top-rated charity, AMF. As of the end of 2015, AMF had an estimated cost-effectiveness of $2,838 per life saved (see GiveWell's analysis) and received $22.8M from Good Ventures (see GiveWell's blog post), which works out to roughly 8,000 lives saved. However, if our research could identify an underfunded intervention with a cost-effectiveness of $2,000 per life saved, that same $22.8M could save roughly 11,000 lives. Our research could then be credited with contributing toward saving an additional 3,000 or so lives.
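This back-of-the-envelope arithmetic, together with the RCT-cost and success-rate guesses spelled out in the next paragraph, can be reproduced in a few lines. This is a minimal sketch: every input is one of this post's rough assumptions, not a validated estimate.

```python
# All inputs are this post's own rough guesses, not validated estimates.
FUNDING = 22_800_000        # Good Ventures' grant to AMF (end of 2015)
AMF_COST_PER_LIFE = 2838    # GiveWell's 2015 estimate for AMF
NEW_COST_PER_LIFE = 2000    # hypothetical underfunded intervention
RCT_COST = 500_000          # assumed cost of one "gold standard" RCT
RCTS_PER_CANDIDATE = 2      # assumed RCTs needed per candidate charity
SUCCESS_RATE = 0.2          # guessed chance a candidate pans out

# Lives saved if the same funding went to the cheaper intervention
lives_amf = FUNDING / AMF_COST_PER_LIFE       # roughly 8,000 lives
lives_new = FUNDING / NEW_COST_PER_LIFE       # 11,400 lives
additional_lives = lives_new - lives_amf      # roughly 3,000+ lives

# Expected research spend before one success: (1 / p) * cost per attempt
research_cost = RCTS_PER_CANDIDATE * RCT_COST / SUCCESS_RATE

cost_per_additional_life = research_cost / additional_lives
print(f"${research_cost:,.0f} expected research spend")
print(f"{additional_lives:,.0f} additional lives saved")
print(f"~${cost_per_additional_life:,.0f} per additional life saved")
```

Using the unrounded lives figures, this comes out to about $1,500 per additional life saved, matching the estimate below.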
If we assume a typical "gold standard" RCT costs $500K[1], that two such RCTs would be needed to establish an existing charity of nearly comparable cost-effectiveness as more cost-effective than AMF, and that we hit on the right charity 20% of the time, it would have taken $5M in expected research spending to save those additional lives, which works out to roughly $1,500 per life saved. Of course, the assumed 20% success rate of discovering charities with a cost-effectiveness of $2,000 per life saved or better is a complete guess and could be wildly inaccurate. Moreover, a lot of additional work and due diligence would still be needed on the part of GiveWell, or some similar funder, and our research would only be a part of that.

A Possible Implementation Plan

A research organization could interview those founding (e.g., Evidence Action, us) or reviewing (e.g., GiveWell, the Global Innovation Fund) effective charities and compile a list of the studies that would most benefit these reviewers and founders. This list could then be prioritized by other research organizations so that the highest-priority studies are done first.

Replications

It's well known that replication research is a neglected but very important area of research. We'd love to see replications of studies that significantly inform views of priority programs. For example, replications of studies analyzing the cost-effectiveness of deworming interventions could verify their impact. We think external validity is a concern when attempting to scale an intervention with impact evidence from one particular area to significantly different areas. For this reason, we'd also be interested in experimental replications of studies in different contexts.

Research on highly promising interventions

While researching particular interventions, we found many unanswered questions that would be amenable to future research. One such example is conditional cash transfers (CCTs).
The large variation in CCTs -- in incentive delivery and amount, and in which action is incentivized -- creates an immense opportunity for research.

Data on intervention costs

Global poverty research commonly aims to correlate an intervention with a positive outcome; there usually seems to be less investment in researching the cost-effectiveness of the program. When reviewing the literature on interventions, we found research elaborating on intervention costs to be quite useful in advancing our understanding of an intervention's relative potential. While GiveWell has provided some considerations against more investment in cost-effectiveness estimates, outlining problems with these estimates being wrong, highly sensitive to assumptions, and not reality-checked, better reporting of costs could help correct all three problems.

More endline metrics

We have written previously about the dangers of measuring "incomplete metrics" and have argued that measuring improvements in IQ could be one of these metrics. We'd like to see more studies focus on endline metrics that are directly connected to well-being, such as income, health, and subjective well-being.

More high-quality "gold standard" studies

GiveWell has laid out a case for focusing on more "gold standard" studies that are well designed and well executed, which helps guard against drawing misleading conclusions from bodies of research whose studies all share the same flaws. While conducting high-quality studies is difficult, we think there are some relatively simple techniques that can increase study quality: pre-registration can mitigate inappropriate data analysis, soliciting peer feedback can improve experimental design, and large sample sizes can reduce the chance of an underpowered study.

Who is Already Working in this Area?
Many organizations already produce high-quality studies, such as the Institute for Health Metrics and Evaluation, IDinsight, the Center for Global Development, the World Health Organization, the Centers for Disease Control and Prevention, Cochrane, the Campbell Collaboration, the Abdul Latif Jameel Poverty Action Lab, Innovations for Poverty Action, and the International Initiative for Impact Evaluation. Other organizations help curate, summarize, and synthesize these studies, such as the Copenhagen Consensus, the Disease Control Priorities Project, Our World in Data, and GiveWell. Though there are many global poverty research organizations, we see enough potential value in further studies to believe there is ample room for more research and data collection in this area.

Reservations

Our biggest reservation is the large number of unknowns involved in committing to research. The process involves a substantial gamble on finding a "white whale" of a highly cost-effective intervention or charity, with very slow feedback loops. Not only might this task be very difficult; it may be impossible. We also put some weight on the idea that it could be easier to produce high-quality research by implementing a particular intervention. For any charity idea we select to implement, we already plan to produce pilot studies and eventual RCTs as the idea allows. Additionally, we're nervous about how we'd acquire funding for starting and scaling our activities. We could certainly hope to get funding from those who would potentially consume our research, such as GiveWell or the Open Philanthropy Project. However, we think there could still be difficulties in seeking large amounts of funding, as a conversation between GiveWell and Development Media International notes that "[t]here are not many funders that support this type of [research] work [because m]ost foundations prefer funding projects that maximize the number of lives saved, rather than evidence-gathering work".
Lastly, we are also unsure whether people would be willing to act on our research once we produce it. From speaking with experts, we have found that many large aid departments in different countries may be slow to update practices based on new evidence, if they update at all. It seems that for all the effort we put into producing high-quality studies, we may have to put equal or greater effort into lobbying relevant decision makers to use the findings, which could be very hard to do.

Remaining Questions

  1. Are there already enough well-evidenced yet underfunded interventions to fund and work on, without further investment in research? How soon could we expect these interventions to run out of room for more funding, if they exist?

  2. How much money could we influence toward interventions that appear more positive in light of our research? If we found a study that cast doubt on a particular intervention, would funders notice or care?

  3. How many interventions or charities don't currently have enough evidence to be recommended, but could become recommended if the evidence were there? How large are these relevant evidence gaps?

  4. How much will lacking formal academic credentials beyond undergraduate degrees prevent us from producing high-quality research? How much would it hurt the reputation of our research?

  5. Is there any research connecting high-quality global poverty research and people in poverty receiving more effective aid?

Endnotes

[1]: This is a high-end guess based on a few figures: replication grants from 3ie are at most $15K, plus $10K to the author to prepare their study for replication; IPA RCTs range from $50K to $500K; and 3ie funds up to $100K for a systematic review.

