
From humans in Canada to battery-caged chickens: which animals have the hardest lives?


Authors of the research: Karolina Sarek, Joey Savoie, David Moss


After spending considerable time creating the best system we could for evaluating animal welfare, we applied it to 15 different animals/breeds: 6 types of wild animal and 7 types of farm animal environment, as well as 2 human conditions for baseline comparison. This was far from a complete list, but it gave us enough information to get a sense of the different conditions. Each report was limited to 2-5 hours with pre-set evaluation criteria (as seen in this post), and consists of a 1-page summary and a section of rough notes (generally in the 5-10 page range). Each summary report was read by 8 raters (3 from the internal CE research team, 5 external to it). The average weightings and ranges in the spreadsheet below were generated by averaging the assessments of these raters.
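As a rough illustration of the aggregation step, the sketch below shows one way per-condition averages and ranges could be computed from individual rater scores. This is only a sketch: the data structure, condition labels, and scores are invented for the example and are not CE's actual numbers or tooling.

```python
# Illustrative sketch only (not CE's actual pipeline): average the 8 raters'
# scores per condition and report the range. All scores below are made up.
from statistics import mean

# ratings[condition] -> list of welfare scores from the individual raters
ratings = {
    "hens (example farmed condition)": [-60, -55, -58, -50, -62, -57, -54, -59],
    "humans (example baseline condition)": [78, 82, 85, 80, 79, 83, 81, 80],
}

for condition, scores in ratings.items():
    avg = mean(scores)
    low, high = min(scores), max(scores)
    print(f"{condition}: average {avg:.1f}, range {low} to {high}")
```

Reporting the range alongside the average preserves a sense of rater disagreement that a single number would hide.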



The goal of Charity Entrepreneurship is to compare different charitable interventions and actions so that strong new charities can be founded. One of the necessary steps in that process is having a way to compare different animals in different conditions. We have previously written both about our criteria for evaluating animals and about our process for arriving at those criteria. This post explains our process and how the results from this system are being applied to different animal conditions.

One of the goals of our system was to be applicable across different animals and different situations. We ended up comparing 9 animals (Humans, Hens, Turkeys, Fish, Cows, Chimpanzees, Birds, Rats, Bugs). These categories do not follow a consistent biological taxonomy, because limited information was available on certain types (e.g. there was enough information on rats specifically to do a report on them, but for wild birds we had to look at a variety of species to get sufficient data). We are not concerned about this limitation, as most of the interventions we are considering would affect a wide range of animals (e.g. a humane insecticide would most likely not be target-specific, so the most relevant data here is an index for bugs as a whole rather than an index for a specific species).


The reports are formatted so that it is easy to quickly grasp the main information connected with each rating. Each report is a summary page with the key information and a short description of why the rating was given, and thus should be polished and readable to all. Each report was time-capped at 1-5 hours, so they are limited in both scope and depth. We are keen to get more information on any of these areas (particularly information that is numerically quantified or related to wild animals, as this was the hardest to find).


Sample report:


After each report was drawn up, each summary was read and evaluated by 8 raters. We tried to get a diverse set of raters, though all had a broadly utilitarian and EA framework. Three raters were from our internal CE research team (the staff who created or contributed to the reports) and five were external to the CE team but involved in the animal rights research space (e.g. working or interning for EA animal organizations). The CE research team talked over ratings and disagreements openly, but the external raters did not see or disclose any CE ratings until after they had submitted their own. Ethically, the raters were best described as classical utilitarians, with some slight variation (e.g. some more prioritarian, some negative-leaning utilitarians). We liked the idea of multiple independent raters: there are many soft judgment calls involved, and increasing the number of people doing ratings seems to mitigate specific biases and fallacies. This approach has also been used before, to good effect, by GiveWell.


Ultimately, we ended up with a wide range of ratings, going from 81 (strongly net positive) to -57 (strongly net negative). Some of the reports were quite surprising and ended up changing our intuitions (for example, many wild animals were worse off than we had initially thought). Others were not that surprising (for example, the rankings of factory-farmed hens).


Our full spreadsheet, with all the ratings as well as links to the 1-page reports, gives specific descriptions of why certain animals and situations received certain ratings. We feel there is plenty of room to improve these numbers, particularly with deeper investigation into the lives of wild animals. But we limited our time on these reports because we have found that, historically, factors like these did not carry the most weight or account for the most variability in our cost-effectiveness analyses. For example, the cost of an intervention can vary by several orders of magnitude, and such logistical factors were more often decisive when choosing between the most promising-looking interventions.
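To make the point about variability concrete, here is a toy calculation with entirely made-up numbers; the simple cost-effectiveness formula below is our own illustrative assumption, not CE's actual model. Halving the welfare rating moves the estimate by 2x, while a 100x higher cost moves it by 100x, so uncertainty in cost tends to dominate.

```python
# Toy illustration (made-up numbers, assumed formula): why cost swings of
# several orders of magnitude matter more than modest welfare-rating error.
def cost_effectiveness(welfare_improvement, animals_affected, total_cost):
    """Welfare points improved per dollar spent (illustrative units)."""
    return welfare_improvement * animals_affected / total_cost

baseline = cost_effectiveness(welfare_improvement=10, animals_affected=1_000_000, total_cost=100_000)
lower_rating = cost_effectiveness(welfare_improvement=5, animals_affected=1_000_000, total_cost=100_000)
higher_cost = cost_effectiveness(welfare_improvement=10, animals_affected=1_000_000, total_cost=10_000_000)

print(baseline, lower_rating, higher_cost)  # 100.0, 50.0, 1.0
```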


If you want to receive information about our latest reports, subscribe to our newsletter. Once a month we will send you a summary of our progress.



