The Dangers of Effective Altruism
Effective altruism’s technocratic worldview narrows our moral imagination and helps sustain human and animal injustice. Philosopher Alice Crary argues that effective altruism (EA) and longtermism, both shaped by Silicon Valley’s techno-utopian fantasies, ignore social structures of oppression and offer either incremental welfarism or galactic transhumanism over genuine animal and human liberation. Highlights include:
Why effective altruists treat concerns about personal integrity as a form of self-indulgence;
How EA's welfarist approach functions as a band-aid that allows its billionaire funders and organizations like Open Philanthropy, Animal Charity Evaluators, and Faunalytics to avoid confronting the larger systems generating animal and human oppression;
How longtermism - rooted in Silicon Valley transhumanist ideology - both preceded EA and continues to shape EA's pro-technology stance;
What longtermism’s emphasis on maximizing numbers of future off-planet, posthuman beings reveals about its disregard for present-day ecological and social crises;
Why EA and longtermists are some of the biggest champions of declining-birth-rate panic;
Why longtermist transhumanism reframes existential risk not as threats to social justice or planetary limits but as anything that obstructs an off-planet, techno-utopian future - including growth-limiting social and ecological movements.
Alice Crary (00:00):
A standard formulation of Effective Altruism is that it's a program for rationalizing charitable giving, something like positioning individuals to do the most good per expenditure of time or money, but the notions of reason and evidence that they're using aren't commonsensical. And the story of its rise has to mention that it was from the beginning tapped into money from Silicon Valley. EA is politically dangerous because it obscures the structural political roots of global misery and contributes to the reproduction of suffering. It just places welfarist band-aids on the damaging status quo, and in effect, it just cements that status quo.
Alan Ware (00:44):
That was moral and social philosopher Alice Crary. In this episode of OVERSHOOT, we'll explore her radical work in animal ethics and her critique of what she argues are the toxic ideologies of effective altruism and longtermism.
Nandita Bajaj (01:07):
Welcome to OVERSHOOT where we tackle today's interlocking social and ecological crises driven by humanity's excessive population and consumption. On this podcast, we explore needed narrative, behavioral, and system shifts for recreating human life in balance with all life on Earth. I'm Nandita Bajaj, co-host of the podcast and executive director of Population Balance.
Alan Ware (01:32):
I'm Alan Ware, co-host of the podcast and researcher with Population Balance. With expert guests covering a range of topics, we examine the forces underlying overshoot - the patriarchal pronatalism that fuels overpopulation, the growth-obsessed economic systems that drive consumerism and social injustice, and the dominant worldview of human supremacy that subjugates animals and nature. Our vision of shrinking toward abundance inspires us to seek pathways of transformation that go beyond technological fixes toward a new humanity that honors our interconnectedness with all of life. And now on to today's guest.
(02:11):
Alice Crary is an American moral and social philosopher at the New School for Social Research and a visiting fellow in philosophy at Regent's Park College, Oxford. Her work ranges across normative theory including feminism, social and political philosophy, aesthetics, meta-philosophy, and their different intersections. She has long published on and taught animal ethics and environmental ethics. One of her active research projects addresses the philosophical limitations and serious worldly harms of the traditions of effective altruism and longtermism. She's co-editor of a collection of essays titled The Good It Promises, The Harm It Does: Critical Essays on Effective Altruism and co-author, with Lori Gruen, of Animal Crisis: A New Critical Theory, which argues for a radical reimagining of our relationship with animals. And now on to today's interview.
Nandita Bajaj (03:08):
Hi and welcome to OVERSHOOT Alice. We are delighted to have you on the show.
Alice Crary (03:13):
Thank you so much. I'm really delighted to be here.
Nandita Bajaj (03:16):
And Alice, we so deeply admire your wide-ranging philosophical work that spans across many of our areas of interest as well - feminism, animal ethics, and environmental ethics. And we appreciate your focus on neglected perspectives of disempowered groups within broader exploitative political and social systems, and especially how you call us to engage our emotions and moral imaginations, to see the world from another being's perspective. We also applaud and share your critique of the toxic ideologies of effective altruism and longtermism and the larger systems of extraction and exploitation that create suffering for humans and nonhumans alike. And that's something we've been so interested in for a number of years, especially as the movement is becoming louder and more powerful with a lot of billionaire money.
(04:16):
So we're delighted to be going deeper into that. In fact, we can start our conversation with that. So you've written several essays and you've also co-edited a book critical of the effective altruism movement. We've had a few guests on the podcast like John Sanbonmatsu and Emile Torres who have touched on many of the failings of effective altruism, also known as EA. But we'd like the opportunity to go deeper into the critique of this ideology with you. Could you describe the history of EA, where it came from intellectually, and how it has developed over time?
Alice Crary (04:59):
I love it that you're starting with this question. I think it turns out to be a more complicated question and in some respects a more interesting question than it may appear. So a standard formulation is that it's a program for rationalizing charitable giving, where this is supposed to mean something like positioning individuals to do the most good per expenditure of time or money. But what is interesting about this formulation is that it uses effective altruists' own categories. They give themselves credit for rationalizing charitable giving or making it reason and evidence-based. But the notions of reason and evidence that they're using aren't commonsensical. They're appealing to things like randomized controlled trials and recommending minimally costly programs they think can be shown with metrics like those of welfare economics to have maximum benefits. And anyway, the point is it's a specialized project, and it's not that EA emerges from ordinary notions of reason and evidence.
(06:02):
There's absolutely no reason to hold that a reason-based approach to giving suggests strategies just like EA's. It could suggest very different strategies. That's a big topic. I just wanted to put a bookmark there before starting. So here's a kind of standard story about the beginning of effective altruism. The term is coined by two at-the-time very young Oxford philosophers, Toby Ord and William MacAskill, in 2011. This was several years after the founding of a couple of organizations that are retrospectively described as devoted to EA. One of them, Giving What We Can, is actually founded by MacAskill and Ord themselves. These two philosophers are often described as having been impressed by a very famous 1972 argument of Peter Singer's about the practical implications of applying utilitarian principles to the suffering of the world's poor. Singer himself in 2009, in a book called The Life You Can Save, actually proposes turning his argument into a charitable movement.
(07:11):
So in effect, effective altruism was their slogan for rationalizing charitable giving Singer-style, and its original focus was extreme poverty in the global south and the terrible suffering of animals on factory farms. So one thing about the standard story is that it tells us EA started some years before it was named, and many of the ideas that animate EA are actually significantly older than its name. And I don't just mean as old as Singer's work in the 1970s about utilitarianism and global poverty, but also that you could find some parallels in the principles of early 20th century philanthropic mega-foundations. I don't want to pursue that, but it's an aside to motivate what I think is most interesting about your question, which is why EA took off when it did, because it took off in a really phenomenal way. It was initially small and Oxford-based, but it soon had offshoots of various kinds elsewhere in the UK and in the US. They're all over the place and they're in a lot of other countries.
(08:19):
Many of them today are multimillion dollar outfits. The amount of money being directed by them as a whole is in the hundreds of millions of dollars annually, if not higher. And this whole conglomerate has recorded pledges in the tens of billions of dollars. Now, MacAskill and Ord are charismatic, and they were well placed at Oxford, but that is not enough to explain this movement's meteoric rise, and the story of its rise has to mention that it was from the beginning tapped into money from Silicon Valley. There's another wrinkle I would add to that story. Today EA has, as you know - I know you've talked about this on the podcast - morphed into what's called longtermism, which is often described - again, this is a sort of standard story that needs to be interrogated - as a future-oriented form of EA. And roughly speaking, it's the view that humankind is at a crossroads at which it can either self-destruct or transcend the human condition, shaking off things like illness and mortality, and realize a glorious techno-utopia.
(09:25):
And moreover, what we're supposed to do is prioritize responding to threats to the realization of this techno-utopia. It was in 2017 that MacAskill came up with the moniker longtermism. Ord defends this EA variant himself in a high-profile book in 2020. It's called The Precipice. MacAskill writes a bestselling book about it in 2022 called What We Owe the Future. And these longtermists think the biggest threat to humanity's realization of its supposed techno-utopia is value-misaligned artificial intelligence. And this is obviously a topic that gets a lot of attention in AI circles. So you could have an image of the emergence of EA that goes like this - EA emerged and then it later grew into something that was of interest to people in AI circles, and then you can explain its funding and meteoric rise in that way. But I actually think that that way of describing the development of EA obscures a lot of the more interesting history.
(10:30):
I think it's more accurate to trace the origins of EA back to conversations among people in the tech world who were already interested in longtermist questions about how to make advanced AI safe. And I first encountered this point in the work of a researcher named Molly Gleiberman. The point is that the tie between EA and longtermism is something like the reverse of what it's standardly taken to be. Longtermism had been developed but not named first. And because the core ideas of longtermism came directly out of Silicon Valley, the tie between EA and the wealth of the tech industry turns out not to be a mere accident. So you can explain its explosive success. And I know that was a complicated story, but I think you need almost that much to explain its emergence.
Nandita Bajaj (11:21):
It's great that you mentioned Peter Singer, because my personal introduction to EA was through Peter Singer just as I was becoming interested in animal rights, and one of the books I read was The Most Good You Can Do. A lot of the examples he was giving in the book were about rationalizing certain behaviors in order to achieve what he would say is the right action. And I guess that's what consequentialism is, that the only right action is the one that has the best consequences. So even then as I was reading it, because I've always been interested in ethics, I couldn't understand some of the examples he was giving - like, you could work for a nuclear facility and make a lot of money even if you don't agree with that kind of career, but then you could give all that money away to the causes that you most care about.
(12:22):
And there were several other examples like that, and with Emile Torres we've discussed the earn-to-give Sam Bankman-Fried debacle. But even then, even to kind of a beginner reader, it just caused me so much unrest to read something like this, because it was asking you literally to compromise your own integrity, your own values, so that you could somehow do the most good in the eyes of some kind of a universal God. And I'm eager to dive more into that with you as we unpack some more of these, but thank you for providing that background.
Alice Crary (13:06):
The charge about integrity is one that effective altruists think they can absolutely deal with. They think it's a form of self-indulgence to care too much about your own character, and that you should care more about doing good. And so I personally think the integrity worry is important, but it's important to frame it in a way that makes clear it's a worry internal to what it is to do good, not somehow just an aesthetic attachment to the glory of one's own character, something like that.
Alan Ware (13:38):
We've read your article Against Effective Altruism, and you outline three main critiques of it: the institutional, the philosophical, and the combined critique. Could you give us an overview of those three main critiques?
Alice Crary (13:51):
So this threefold taxonomy I used really to make a single criticism of effective altruism while also explaining how the depth of the critique could be missed. So that's why there are multiple pieces. The heart of it is really the first one, the institutional critique. This critique was developed within a couple of years after the 2011-2012 birth of effective altruism. There was a forum in the Boston Review in 2015, and there were some really prominent political theorists and economists who were laying out what then became known as the institutional critique. And what it does is attack EA for operating with a damagingly narrow account of the class of things assessable as right or wrong, roughly. It's something like that. And the target of the critique is the tendency to focus on single actions or simple actions and their consequences, like social interventions that reduce suffering.
(14:54):
People who like the institutional critique are calling out what they see as the neglect of the kinds of coordinated sets of actions that are directed at changing social structures like regressive economic arrangements or biased legal institutions, social structures that reliably and repeatedly cause suffering. We might want to change those. And so they were suggesting that the neglect of the assessment of coordinated actions aimed at social transformation was a kind of measurability bias, just measuring what you have the tools to measure. And I think this is a really interesting and important criticism. EA is supposed to be politically dangerous because it obscures the structural political roots of global misery and contributes to the reproduction of suffering by bypassing and weakening political mechanisms that could actually be engaged for positive social change. That's the basic idea, and I think it's sound, but effective altruists think they can avoid this critique.
(15:57):
I think you mentioned this already, that the moral theory is basically consequentialism, on which moral rightness is a matter of the production of the best consequences, and what's best is understood quantitatively, as what has the most value. They still mostly accept versions of consequentialism that identify value with wellbeing and would count as forms of what is called utilitarianism. They're making a methodological assumption which is in fact morally freighted. And that is that the social circumstances, and in particular the wellbeing, that they want to talk about are discernible from an abstract point of view. They often use a little bit of jargon, the point of view of the universe, and you can see why that matters to them, because then suddenly you can use wellbeing as a measure for comparing moral outcomes anywhere. You can go across space to the global poor, you can go across species to nonhuman animals.
(16:55):
And when you get to longtermism, you can go across time to the far distant future. This is a really limited and distorting way to think about what society is. We need categories that are more nuanced and engaged, and that presuppose historical understanding rather than a merely abstract view, if we're going to understand social relations and social life. EA is built on the idea of this abstract view that allows us to do quantitative stuff - so it can't lay claim to the kinds of tools it would need to assess efforts to change the normative structure of society. It has forfeited the engaged and historically informed perspectives necessary for such assessment. But the point isn't that everything that effective altruists want to do is wrong. Sometimes it is important to address suffering, and there are all kinds of emergencies where suffering of humans or animals is acute. But EA doesn't have the resources to help us think about when we should be addressing suffering and when we should be campaigning to remake the world so we don't just keep on reproducing the same forms of suffering year after year, generation after generation. It just places welfarist band-aids on the damaging status quo, and in effect, it just cements that status quo.
Nandita Bajaj (18:18):
And not surprisingly, effective altruism is very popular among Silicon Valley tech bros and billionaire philanthropists, as you've already alluded to, who, while maintaining business-as-usual power imbalances as you covered in your institutional critique, can feel good about the purported effectiveness of their actions. And you've captured in your writing that there's now this tight network of essentially incestuous EA organizations. Are you able to just name a few prominent organizations and people within this network so we kind of know what we're dealing with? These are names we hear a lot, and people don't always affiliate them with EA. I think it would be helpful just to understand the pervasiveness of this ideology.
Alice Crary (19:10):
And I can talk about the animal issue in particular. So the main organization that we're talking about in the domain of animal protectionism is Animal Charity Evaluators, which, as the name suggests, is not as it were doing social interventions on behalf of animals itself, but is actually rating organizations that do this work. You could also mention Faunalytics in this context, but probably the biggest power player is an organization that works both in the domain of animal advocacy and also in the domain of global poverty, and that at the moment is also doing a lot of longtermist grant rating and grant giving - and that is Open Philanthropy, working in both of these areas.
Nandita Bajaj (20:08):
Also, what I've noticed just being part of the animal rights community for a number of years is, because of the funding coming out of Open Philanthropy and then the evaluation using these EA metrics by Animal Charity Evaluators, there is this kind of circular reinforcement going on around who is considered to be worthy of some of this billionaire funding. And there has been this creep of welfarism within the animal rights community because of all of this money. And I wonder if you could speak to the increasing influence of effective altruism within animal rights advocacy and how it's leading to the devaluation of radical animal liberation efforts?
Alice Crary (20:59):
Yeah, it's a great topic for me because one of the things that got me into writing about effective altruism in the first place was animal advocacy. So this was early 2020, just before the pandemic hit. I was already interested in effective altruism. I hadn't published on it, but I had already developed some version of the critique I was just laying out for you. And I was at a conference that brought together academics and activists, intellectuals who were thinking about the treatment of animals. A number of people at this conference in 2020 were struck by how often we were hearing leaders of pro-animal organizations say that they had been asked to demonstrate the effectiveness of their work in ways that they thought were distorting the nature of what they were doing. That was part of what motivated me to argue, vis-a-vis EA-influenced animal activism, that EA does real-world harm.
(22:01):
And just to make it really concrete, here's a description of a study I did in 2020. I was looking at figures from Animal Charity Evaluators from 2019, and I was looking at the nine pro-animal organizations that received either their highest or second-highest ratings. You'd go to their website and they'd be telling you, these are the organizations you should be giving money to. And at least eight focused on farmed animals, and of these eight, six were primarily concerned with welfarist improvements within industrial animal agriculture, with the other two doing a little bit more work on structural transformation. But one of the things that was interesting to me at the time was that Animal Charity Evaluators' website even explained that it had more confidence in assessments about the impact of welfarist interventions than in those that focused on systems change. And yeah, that's just what one should expect in looking at EA. So you can see how this desire to get funded could lead an organization to change a bit what it's doing. Recently, a number of effective altruists have started to look at animals in the so-called wild and look at the intersection of environmental and animal ethics. And I think the strategies they use even in those contexts are just as problematic. And some of them, like MacAskill in particular, have such outrageous views, like the view that the level of suffering in wild nature can be so high that it might be better not to have it at all.
Nandita Bajaj (23:36):
Yeah. In his 2022 book, William MacAskill suggested that our destruction of the environment might actually be a net positive, based on the assumption that all sentient creatures suffer and that by simply obliterating them and the biosphere, we reduce the total amount of suffering. I think this point doesn't get enough attention - that this view is quite prevalent within the EA community, that reducing suffering could actually mean just getting rid of the entire biosphere.
Alice Crary (24:09):
In the context of longtermism, as one of your other recent guests, Emile Torres, likes to say, it's an extinctionist program, which is hard to understand at first glance. But they're imagining a future in which we transcend the human condition, in which there are in effect no more humans, and we're not going to be worried about the biosphere because what we're trying to do is move ourselves out into the galaxies and not be around for the heat death of the sun.
Alan Ware (24:44):
And that disturbing element of longtermism you've definitely gone into in your article, The Toxic Ideology of Longtermism, where you explain how and when it came about and why it carries such dangerous implications. Could you share some of that history and elaborate more on the dangers?
Alice Crary (25:02):
Absolutely. And I should say that that article isn't terribly old, but it's a few years old, and my research on the movement has taken me beyond it. And it now seems clear to me, I was saying this earlier, that longtermism has roots in discussions in Silicon Valley that are decades older than its official start. And I think talking about these origins is really helpful for shedding light on the harms of longtermism. So longtermism started in 2017. That's the official start, when MacAskill comes up with a moniker for it. And it gets treated, like I said, as a future-oriented EA offshoot, as if that's the order: start with EA and then you head towards the future. But the view he was naming had been debated and defended for some time at an Oxford institute called the Future of Humanity Institute. It's a whole other story that it was shut down in 2024, and I wasn't going to go into that, but it was known as FHI, and Swedish philosopher Nick Bostrom founded FHI in 2005.
(26:10):
And he's actually the person who came up with longtermism, just not the name. Toby Ord was one of the two Oxford philosophers who came up with the name effective altruism in 2011, but he had been working since 2006 at FHI, where longtermism was being developed. And again, we're talking about this view that humankind is at a stage of technological development in which we could annihilate ourselves or go on to a supposedly radiant techno-future as post-humans, and that we should prioritize confronting threats that cut us off from that future. So it's actually part of the meaning of longtermism, as Ord understands it, as MacAskill understands it, and as others do, that a future that qualifies as suitably radiant has to be one in which immense numbers of these technologically facilitated post-humans live on for billions or trillions of years by colonizing other star systems. And that's what makes it an extinctionist view.
(27:15):
There won't be any more humans as we know them. In any case, longtermists focus on what they think the biggest threats are to their utopia. And they overwhelmingly agree that among the biggest threats is AI, and specifically machines with above-human-level intelligence whose values aren't aligned with ours, whatever that means. But this view was circulating in Silicon Valley before MacAskill coined the term longtermism. So the first formulation of the view is in a set of papers that Nick Bostrom of FHI wrote between about 2002 and 2005, when he was a young scholar. He was thinking, oh, technology's going to destroy so many traditional areas of inquiry. And he was looking for the right sort of intellectual community for himself. And he found what for him was a congenial community on an email list managed by futurists who called themselves extropians. And these were advocates of a modern strain of the tradition of transhumanism.
(28:22):
And their email list introduced Bostrom to transhumanist ideas. This was basically people in tech circles on this email list. So he's talking about modern transhumanism with people to a large extent tied to Silicon Valley, who think that genetic engineering, AI, and possibly molecular nanotechnology are going to allow us to enhance ourselves and transcend the human condition. And that's a radical eugenic project - not just imagining we can make better humans, the eugenic dream, but transcending the human condition altogether. And Bostrom went all in for transhumanism. He co-created the World Transhumanist Association in 1998. He started talking about transhumanist ideas using the tools and concepts of analytic philosophy, and he introduced the category of existential risk, which is now widely used in AI and policy settings around the world - at the UN, in the US government and the UK government, and in many other national governments. The term seems really legible, but it encodes a whole worldview.
(29:32):
For Bostrom, existential risks are these dangers that could permanently destroy the potential of humankind to develop into the sort of post-humanity that transhumanists envision. That's not the kind of thing most of us mean when we talk about an existential risk. At the very least, you'd think an existential risk is something that's going to endanger all human existence - but transhumanists are pro-extinctionist, so when they talk about existential risk, we have to pay attention and know that it's something different. In 2003, Bostrom published an article in which he presents a utilitarian case for holding that the expansion of the human population through space colonization is such a great good that reducing risks to its attainment has got to be our top moral priority. By 2014, he's definitely on message that the main existential risk for him is super-intelligent machines. So he writes a book called Superintelligence, which became a bestseller, and at that point he's also an establishment figure, and he's made Silicon Valley transhumanism, which had been somewhat academically marginal, respectable.
(30:38):
And so I do think, if we're telling the story about the emergence of longtermism, it's important that when Ord and MacAskill write their books about it, they don't mention transhumanism, not even once. So it's easy to miss how longtermism emerged, because in some sense its origins are hidden in the way that its main exponents are talking about it. But in any case, you get the establishment of longtermism, you get the kind of incestuous relationship between funders and institutes that are developing views which really reinforce the worldview of the people funding them. And I was thinking about doing this podcast today and thinking I would need an entire podcast to talk about those connections - which longtermist institutes are funded by which groups of people in the AI world - and it just goes on and on. It's a story all unto itself.
(31:47):
But I was going to try and say something about why longtermism, through this lens of its connection to Silicon Valley, is so toxic. And I think people can be puzzled, partly because longtermism, like effective altruism, is really well-branded. What's wrong with thinking about the long term? In fact, nothing is, and it's characteristic of social justice movements to be concerned with creating more just conditions for future human generations. But that's not what longtermism is. It's stressing concern for, I guess we should say, the "wellbeing" of the prospective trillions of post-humans who will live on into an imagined distant techno-future. And longtermists tend to be total utilitarians who think that a world with more wellbeing is a better world, and so it follows for them that the huge numbers of imagined prospective post-humans that they're talking about outweigh all other moral issues. The chief threat to their utopia, they take to be misaligned or rogue machines. And so you wind up with a view on which there are no limits, or almost no limits, to the resources we should devote to building good robots and avoiding rogue robots.
Alan Ware (33:12):
These are technologists, computer scientists, engineers who definitely assume that technology is all-powerful, either all evil or all good, either way. As with AI two years ago, they were talking about how it is an existential threat, which may have been partly to hype it for the market - oh, it's so powerful, but only we can control it. And now it's more, we need to be the good guys winning the race against Chinese AI. So they've pivoted in that way, but it's still ultimately a technologist view of reality: we don't have material or energy constraints. Which fits with their extropian view that the law of entropy, the law of disorder in the universe, doesn't exist - that through our brainiac engineering minds, we can reengineer everything according to information alone, which is just such an ecological, biophysical blindness.
Alice Crary (34:07):
And at the same time, we're supposed to downplay the importance of actual problems like structural racist, gendered, and ableist bias, since for them these problems are non-existential. And again, they are non-existential in their sense. It doesn't matter that they cause great suffering and mortality. And so you get some really shocking outcomes in longtermism. One shocking outcome is that they're dismissing harms and wrongs caused by the failure to properly regulate the building and running of AI systems. But even more shocking is that they'll invite us to regard as existential threats the kinds of social justice movements that call for limiting growth in the name of sustainable and equitable forms of life. And the problem for them is that these movements threaten growth-fueled techno-utopias that depend on increasing energy use and so forth to maximize the cosmic spread of digital intelligences. So you have things like Peter Thiel in the New York Times suggesting that Greta Thunberg is the Antichrist, and it's not a joke.
Nandita Bajaj (35:22):
Yeah. I'd like to pick up on what you said about this obsession within longtermism with population growth. Even though they are looking at a different type of population that'll exist in the techno-utopian future, that doesn't stop them from pursuing population growth ideals for the current human form today. And I want to point out this very interesting connection that seems to have gone missing from within the social justice community and from liberal media outlets that are reporting on these issues: a couple of years ago, this academic Dean Spears at the University of Texas at Austin received $10 million from Elon Musk to start an institute called the Population Wellbeing Initiative, PWI. It sounds very neutral and good, something about wellbeing, but it's funded by Elon Musk, who thinks the biggest risk to humanity or civilization is declining fertility rates - not climate change, not biodiversity loss, not species extinction, but declining fertility rates.
(36:40):
Dean Spears has since appeared in the New York Times, Time Magazine, and on NPR, with no critical analysis of where the funding came from, to promote his view, and today he's become the biggest spokesperson in academia for depopulation panic. And his latest book, called After the Spike, is all just very selective arguments about why population decline is one of the greatest threats that we face, even amid the fact that we are still growing. We're still projected to add 2 billion more people, which would be the largest population that has ever existed at any given time in human history. But the very interesting thing, to bring it full circle, is that one of the first and most popular endorsers of Dean Spears's book is Peter Singer - the same Peter Singer who apparently cares a lot about animals, when there has been no greater project that has led to the annihilation of other life than the human domination project. So I just find all of this to be extremely disturbing, and I'm curious about just where this project is going.
Alice Crary (38:02):
I don't know that I pay much attention to Dean Spears, but whenever I see an article on pronatalism or depopulation fears, I look to see how it's funded. And I mean, I think the connections go straight to transhumanism and longtermism. Elon Musk promotes the work of Nick Bostrom and Will MacAskill, and his professional ventures are in a sense lined up so that they're an expression of transhumanist commitments. He's working on computer-brain interfaces in one venture, and on off-worlding with SpaceX in another. And so he is a walking, talking transhumanist, whether he talks about it in those terms or not. And he has been actively enthusiastic about the writings of the longtermists. But I suspect Peter Singer's endorsement of Dean Spears is the enthusiasm for someone who is thinking along the same lines as he is - sort of generic, forward-looking utilitarians. And he'll say things like, but it's a good thing to have people thinking about the future. Of course, we're all going to nod. It's great to have people thinking about the future. I just don't want it to be people thinking about the post-human, techno, off-Earth future. I'd rather think about here.
Nandita Bajaj (39:27):
Yes, us too. And I think, off of that comment, you've noted that many researchers pursuing God-like artificial general intelligence are also longtermists, united in their belief that we must keep pushing the frontiers of technology for the sake of a better future - what you just said, the techno-utopian off-Earth future. How do you see longtermism and AI believers as connected, and why, as you argue, should we be outraged by AI?
Alice Crary (39:59):
Yeah, well, I think you've both said things that are really relevant to understanding why I think so. It is this nexus of longtermism and AI that accounts for my willingness to say things like, if you're paying attention to the AI industry, you should be outraged. And part of it is, I wouldn't describe many people in Silicon Valley quite as longtermists, but more as transhumanists, even though the traditions have sort of run into each other. So you do have people like Eliezer Yudkowsky, who's getting a lot of attention right now, and bestselling author and engineer Ray Kurzweil, who are out as transhumanists. That's a label they've used or do use. But you have people like Peter Thiel, Sam Altman, and Musk, whom we were talking about, who may not use the label, but they have lots of commitments and ties to transhumanist organizations. And so you see transhumanism sort of saturating or providing the context for a lot of discussion within the EA industry today.
(40:59):
And so you can hear transhumanist or longtermist themes among the players who are rushing to dominate the market, in what's sometimes called the AI arms race, where you hear them saying, oh, well, the big issue is to figure out whether AGI, artificial general intelligence - which is supposed to be the technology that will bring us to our techno-utopia, it's what we need - could also be the biggest existential risk. And so you get conversations that go on between people in AI who are called doomers, because they think there is an existential risk, and people who are called accelerationists, usually because they are sort of unqualified in their positivity about building AGI. And you wind up with most people in the industry saying, well, there's at least some risk, so we would need to worry about the so-called alignment problem - that's the interest in what's called AI safety.
(42:04):
And they wind up in the paradoxical position of raising alarm about the technology they're frantically trying to build while hauling in immense sums to, they say, keep it from ending humankind. So it is quite a performance, in a sense. But what happens then is you have this talk of AI safety, and that's a field which is concerned with existential risk, the risk of a theoretical rogue AGI, artificial general intelligence. And they do not talk very much about what's called AI ethics, which is about so many different things. It's about things like lack of data privacy, algorithmic biases, deepfakes, fake news, generated hate speech, but also things like the exploitation of workers doing data annotation, extractivist injuries, and environmentally devastating water, mineral, and energy use. And there's no reason to think that you could make AI systems safe in the ordinary sense without dealing with ethical issues.
(43:08):
They're also arguing that racial justice and related issues aren't as existentially serious as the idea of these machines getting more intelligent than us and taking over. And the sidelining of ethics is a shared thing between AI doomers and AI accelerationists. And so you have this speculative project of a really small cadre of immensely wealthy men who are based in the global north, and for them the issues they're sidelining may not be existentially threatening - they aren't - but they're sidelining all of these issues that matter to so many of us. And that's the stance that longtermist philosophers are giving their moral blessing to. And we can see how terrible it is if we try to bring into focus how big the environmental harms are and how serious some of the social justice concerns are in the current race to build AGI. So that's how it's coming together with longtermism, because longtermism is the sort of ideological fabric that allows them to say, no, the real issue is AI safety, it's more important. All these supposedly little issues aren't going to matter if we're all dead because the super-intelligent machine kills us off.
Nandita Bajaj (44:39):
Your article is appropriately titled - that we should be outraged. But I think we're also outraged by how many people are not outraged by AI and by how pervasive it's becoming. And there are all these new movements that are emerging, like how can we use AI for good, AI for social justice, AI that is gender-equal. And it's taken for granted: it's coming, let's use it to our advantage, let's leverage all of its positive aspects. And along those lines, there's also this kind of emergent movement, not surprisingly, within the same EA circles - animal advocates especially - who are now arguing for extending rights to AI or AGI on the basis of its potential sentience. We know what we think about this, but we want to hear your thoughts about this move.
Alice Crary (45:37):
Yeah, okay. So it's like they're moral thinkers and they're saying, oh, it's going to be really important to include artificial systems that become intelligent or have sentience, that this is where the action is. They're new creatures; we have to expand the moral circle and bring them in because they're sentient too. At least when a lot of researchers talk about what they mean, they're imagining something that's agential - or, weirdly, they've come up with their new word for having agency, which is agentic. So we talk about an AGI being agentic. And it's really not clear, when you get just a little bit into the technology of large language models, which are the kinds of machines that we're talking about, why we should think they're about to spawn agency. Something endowed with agency is not just going to be controlled by its impulses. It needs to be able to step back from its impulse to think or believe or act on something and to ask itself whether it should believe or do some particular thing.
(46:46):
And the fact that large language models converse in ways that give the impression of intentional speech makes people think maybe that's what they're dealing with, but they're actually machines that are managing this through sensitivity to the relative frequency of expressions in their databases. And so it's just not clear, I think - and here I'm following the lead of a lot of really sophisticated tech commentators, journalists, and scientists - why we should think that we're on the verge of getting agents here. But it is clear that this story about the rush to build AGI, which, if it's safe, is supposedly our route to this utopian techno-future, is an incredibly important part of the story that big AI is telling about why we should sanction what they're doing and why we should disregard the environmental harms and the serious social injustices. And so you get this new literature, which I'm intentionally not mentioning any of the authors of, although I've read quite a bit, because I find it, at an intellectual level, almost laughable. But I think really what it's doing is just adding to this diversionary tactic. It's simply playing along with this ideological account of what's happening in our world and distracting us from what's really going on. Yeah, it's an extended form of EA, and it's problematic like that. If we want to understand why these books and articles are getting so much attention, I think we need to just look a little bit deeper, and we can understand that longtermism, like various forms of transhumanism, is just distracting us from what's really happening.
Nandita Bajaj (48:32):
Yeah, totally agree with you. And I think because these projects are so heavily funded and in our faces all the time, in some ways it makes sense why so many animal advocates are unintentionally starting to tap into nonsense like AI for Animals or Sentient Futures. I just don't think there's enough systems analysis or historical analysis, by a lot of well-intentioned animal advocates, of how these systems have emerged and gone on to become more and more oppressive - and that's mainly where I'm seeing a lot of this emerging.
Alice Crary (49:14):
Yeah, I mean, I do advise a lot of younger scholars including animal advocates who want to have a training in moral philosophy. And it's not just that we're seeing real funding for these other projects that are talking about sentience and AI in connection with welfarist approaches to animal ethics. But it's also true that the younger scholars, who really think these are damaging ways to work and want to work in ways that focus on the intersection of social injustice and harms to animals, have real trouble getting published. Some really great books I know of that do finally come out have come out after 20 rejections. It is harder to get them out.
Alan Ware (50:03):
So as we've discussed, effective altruism relies heavily on utilitarian reasoning, and there are many animal rights arguments that rely on rights-based arguments. And you have a new book with Lori Gruen titled Animal Crisis: A New Critical Theory that offers approaches for more radical, necessary structural transformations to dismantle human supremacist systems of oppression. Can you explain why some of those rights-based approaches, in addition to utilitarianism, can also be inadequate for advancing animal justice, and highlight some of the main arguments in that book?
Alice Crary (50:37):
It's really nice that you asked me about this book that I wrote with Lori Gruen, because it's a project that's very important to me. What we're doing there is the kind of thing I was just describing that some of the younger activist scholars I work with are trying to do. We're trying to show that there are systematic ties between the devastation and killing of animals and the oppression of marginalized human groups. And it's written to really be concrete and not treat animals as abstractions. So it's always about the relationship between animals and human beings - a particular animal kind, in a particular context. That's the way each of the chapters of the book is written. But throughout the book, we weave a narrative which reveals historical, structural, and conceptual relationships between the devastation of animals and the oppression of marginalized human groups. Historically, you're talking about things like historians showing how European practices of treating animals as mere resources emerged alongside horrific injustices to human beings like the Atlantic slave trade and the establishment of colonies.
(51:53):
And then we weave in the work of social theorists who are talking about social structures that explain why, in fledgling but also advanced socioeconomic arrangements - the kind we call capitalist, with their emphasis on production for profit - the devastation of animals and nature and the degradation of women and others involved in social reproduction are yoked together. So you get that non-accidental tie explained. And then, I think also really importantly, we're talking about conceptual ties - that animal and human forms of debasement are connected through the use of descriptions of subjugated people as less than human, as animal-like, as tools of debasement. So marginalized people are often compared invidiously to pigs, to dogs, to chimpanzees. And this kind of animalization is internal to the very categories used to pick out favored groups, internal to the way we think about things like race and gender and ability and disability too.
(53:05):
So that's the book. And it's true that in the book we're criticizing the sort of utilitarian-based or welfarist approaches to animal protectionism that one finds in EA, although we don't talk about EA in the book. But welfarist strategies are problematic for the reasons we've talked about: in aiming to lessen harms to animals, they fail to challenge the structures of human-animal domination that are primarily responsible for those harms, and they even run the risk of validating and strengthening those mechanisms. You asked about rights, and we're not in the same way attacking all rights-based approaches to animal ethics, partly because there's a variety of approaches and some could be aligned with what we're doing. So some are problematic because they're not contesting society-wide practices in which human-animal normative ranking is embedded. So they run the same danger as the welfarist approaches. And we do talk in this connection about existing rights-based strategies in animal ethics, most of which focus on negative rights.
(54:14):
So you're treating animals as having the right maybe not to be displaced, killed, or harmed, or the right to be left alone - so negative rights. And by themselves, the idea is, if you start trying to think about what it would be to grant these rights, you're allowing that damaging practices and institutions stay in place, and you're, as it were, just trying to create spheres of inviolability within them. That's the way we put it. It's not in itself a bad thing, but it's not addressing the structures that are causing harm again and again. And there's a really rich literature for those who are working on animal ethics, which is about 10 years old now, which is sometimes referred to as the "political turn" in animal ethics. And you have in that context scholars talking about the need to talk about positive rights for animals - the rights to be provided with life-promoting goods and protections and so forth.
(55:14):
And these kinds of rights can be talked about in ways that fail to challenge human-animal moral hierarchies, but they could be introduced in ways that are supposed to directly stand up to human supremacism. So they might provide a way forward in ending human domination of nonhuman life. It's just going to depend on how you describe them. So, to summarize, what we're talking about is not an attack on rights-based approaches per se, but only on those that aren't capable of critically examining how contemporary liberal or illiberal democracies are intertwined with global extractive capitalism and its damaging structures.
Alan Ware (56:00):
Right, yeah. It seems like rights are typically only as good as the nation-state that's codified and enforced them. And if those rights are codified in capitalist property law and other types of law, they don't mean as much. So it's interesting - we had Dinesh Wadiwel on talking about the "political turn" and how ultimately the courts where these rights are decided are inherently quite conservative, going by precedent going back centuries, and that what we need, as Dinesh was saying, is more of a social movement, a bottom-up political movement. And part of that would be like what your book is talking about - the marginalized humans in this system, and the exploitative, commodifying, oppressive systems that are used against animals and people - and both of those struggles can work together. And I like how each chapter is focused, like you said, on a specific animal - orangutans, pigs, cows, rats, parrots, and ticks - each reflecting a different issue.
(57:01):
And that's appreciating the beingness, the sensory experience of each animal. And I think you had mentioned that literature and the humanities are really important to you, and reading literary fiction has been shown to increase perspective-taking. So that helps us get inside an animal's perspective too, and I appreciated that. And a lot of these EA people and technologists, computer scientists and engineers, maybe they read certain science fiction. There's a certain kind of technologist fantasizing, but I don't know that it's as much about taking on internal perspectives in the way that you do in this book, which I appreciated and which seemed to inform it.
Alice Crary (57:43):
I can't remember who I was reading. I think it might be Adam Becker's book, which is called something like More Everything Forever, where he's like, they all read science fiction, but they often misunderstand it. And so you have Peter Thiel not realizing that Palantir is like Saruman's eye and that's the bad guy. You guys are the bad guys and stuff like that. Or they're thinking Star Trek is really cool, which is space democracy. They don't get what's happening.
Nandita Bajaj (58:18):
Well, Alice, this seems like a really nice place to wrap things up. Thanks so much for the time that you've taken, not just today to meet with us, but in the research that you've delved into so deeply to explore so many of these varied topics - and you've beautifully connected them into a very comprehensive explanation for us. We really appreciate your time.
Alan Ware (58:46):
Thank you, Alice.
Alice Crary (58:47):
Thank you both. I'm really glad you're having these conversations and I'm so honored and flattered to be part of them.
Alan Ware (58:53):
That's all for this edition of OVERSHOOT. Visit populationbalance.org to learn more. To share feedback or guest recommendations, write to us using the contact form on our site or by emailing us at podcast@populationbalance.org. If you enjoyed this podcast, please rate us on your favorite podcast platform and share it widely. We couldn't do this work without the support of listeners like you and hope that you'll consider a one-time or recurring donation.
Nandita Bajaj (59:22):
Until next time, I'm Nandita Bajaj thanking you for your interest in our work and for helping to advance our vision of shrinking toward abundance.

