The Dystopian Delusions of Tech Overlords

Silicon Valley billionaires, such as Elon Musk, Jeff Bezos, and Sam Altman, promise salvation through space colonization, immortality, superintelligent AI, and endless growth. Adam Becker, astrophysicist and author of More Everything Forever, debunks these profoundly immoral and biophysically impossible delusions, and explains why resisting them through collective action is essential. Highlights include:

  • How tech billionaires confuse science fiction for reality and why their fantasies of space colonization are biophysically impossible;

  • Why Artificial General Intelligence (AGI) remains an ill-defined concept that is based in the false assumption that humans' evolved brains work like computing machines;

  • Why large language models (LLMs), the dominant form of AI, are neither creative nor accurate enough to achieve the dreamed-for leap in machine intelligence;

  • What the end of Moore's Law tells us about diminishing returns to technological complexity and the expectation of endless technological growth;

  • Why longtermism is a dangerous ideology of technological salvation and endless growth, prioritizing hypothetical future populations while excusing present-day social injustice and ecological destruction;

  • How the fear of death underlies techno-utopian off-planet and transhumanist fantasies;

  • Why resisting their oligarchic visions requires calling out the ridiculousness of their ideas and organizing collectively to push back both politically and economically.

TRANSCRIPT:

  • Adam Becker (00:00):

    I love science fiction. I'm a scientist by training. I love science. The problem is not science. The problem is not even science fiction. The problem is that these people are bad at reading science fiction and don't understand science. It's not so much that they take it seriously. They take it literally. They take it as a blueprint for the future when that's never what fiction is. And so I think talking about the future in a different way, I think, is part of the solution, and making it clear the stuff we've been told about AI and the stuff that we've been told about our glorious future in space and the stuff we've been told about technology, it's hot nonsense that's not happening.

    Alan Ware (00:39):

    That was journalist and astrophysicist Adam Becker. In this episode of OVERSHOOT, we discuss his latest book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity, about the wildly implausible and profoundly immoral visions that tech billionaires have for the future.

    Nandita Bajaj (01:10):

    Welcome to OVERSHOOT, where we tackle today's interlocking social and ecological crises driven by humanity's excessive population and consumption. On this podcast, we explore needed narrative, behavioral, and system shifts for recreating human life in balance with all life on Earth. I'm Nandita Bajaj, co-host of the podcast and executive director of Population Balance.

    Alan Ware (01:35):

    I'm Alan Ware, co-host of the podcast and researcher with Population Balance. With expert guests covering a range of topics, we examine the forces underlying overshoot - the patriarchal pronatalism that fuels overpopulation, the growth-obsessed economic systems that drive consumerism and social injustice, and the dominant worldview of human supremacy that subjugates animals and nature. Our vision of shrinking toward abundance inspires us to seek pathways of transformation that go beyond technological fixes toward a new humanity that honors our interconnectedness with all of life. And now on to today's guest.

    (02:15):

    Adam Becker is a journalist and astrophysicist. He's the author most recently of More Everything Forever, which the New York Times called 'smart and wonderfully readable.' He's also the author of What Is Real?, a book on the sordid, untold history of quantum physics. In addition to his books, Adam has written for the New York Times, the BBC, NPR, Fortune, the Guardian, Scientific American, Quanta, and many others. He has a PhD in cosmology from the University of Michigan, and he lives in California. And now on to today's interview.

    Nandita Bajaj (02:51):

    Hi Adam, and welcome to the OVERSHOOT Podcast. We're absolutely thrilled to have you.

    Adam Becker (02:57):

    Thanks for having me here. I'm thrilled to be here.

    Nandita Bajaj (02:59):

    And Adam, your latest book, which is the subject of today's interview, is called More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity. Great title. And given your background in astrophysics, we're excited about delving into your science-based critique of what you call 'a jumbled mix of shallow futurism and racist pseudoscience.'

    Adam Becker (03:30):

    Yes.

    Nandita Bajaj (03:32):

    Which is driving so many of these tech billionaires' delusional and dangerous dreams of a techno-utopian future. And in contrast to the Silicon Valley elites' arrogant belief in more, everything, forever, here at Population Balance we believe, much like you, in the humility of more of some things, like flourishing nature, and less of others, like shallow materialism, for a time, recognizing that we are mortal. So we'd like to begin this conversation with some of the more fantastical visions in Silicon Valley, and then we'd like to go deeper into how their ideas, and the immense wealth and power behind them, are shaping the choices we make here on Earth. And you describe on your website that your book is an attempt to expose how 'Silicon Valley's heartless, baseless, and foolish obsessions with escaping death, building AI tyrants and creating limitless growth are about oligarchic power, not preparing for the future.' This is a lot to consider, and you do it so compellingly in your book. What motivated you to write this book?

    Adam Becker (04:45):

    Well, several things. In some sense, this book is just my response to living in the San Francisco Bay Area for over a decade, being unhappy with certain things that I've seen out here in the culture, and seeing those things filter out into the wider world. But also, I'm a physicist by training, and my first book was about quantum physics. And while I was writing that book, a whole bunch of horrible things started happening all around the world. Especially right here in the US, we had the 2016 election. In the UK, there was Brexit. There just seemed to be a resurgence of far-right, fascist, authoritarian nationalism around the world, and democratic backsliding in countries that were previously thought to be stable democracies. And I thought, what am I doing here, sitting here writing about quantum physics while the world is burning? And I promised myself that if I got the opportunity to write another book, it would be something more directly politically relevant.

    (05:45):

    And when I started thinking about what that might mean, I pretty quickly came to the conclusion that something about how Silicon Valley gets science wrong might be interesting, and might be something that I was well-placed to write about. And as I started working on that, it morphed from, oh, Silicon Valley doesn't understand science - though they by and large don't - into something broader about Silicon Valley's ideas about the future, why they don't work, and where they come from. And I finished this book before the 2024 election here in the US, and when I finished it, I thought that I was going to have to do a lot of work convincing people that, no, the tech billionaires really don't have our best interests at heart, and Elon Musk really is a horrible person, and so on and so forth. And then between finishing the book and the publication of the book, the strangest thing happened.

    Nandita Bajaj (06:47):

    Yeah, I think that's what's so refreshing about your book too. So many of us, especially on the progressive left, kind of see through the delusional nature of these billionaire fantasies, but don't necessarily have the scientific critique that's required to say, no, none of it is possible or probable. A lot of people will say, well, Elon Musk is so smart, but he's taken a turn to the right. But having someone like you come in and just say, no, none of this. They like to call it science, but it's not really science; it's just money. And they like to think that money has bought them expertise, which it hasn't.

    Adam Becker (07:34):

    Or they like to think that they have the money as proof of their intelligence. Right. They're smart and you can tell that they're smart because they're wealthy when the fact is that that has more to do with luck than anything else.

    Nandita Bajaj (07:48):

    Yes, exactly. And I think the reason these ideas have spread as pervasively as they have, and you've spoken about this as well, is also because of a lot of gullible journalists who are trading away their ability to criticize these billionaires in order to gain access. And then they're writing about the hype in the way that the billionaires intend for them to write. So we're not really getting good scientific critique from journalism.

    Adam Becker (08:20):

    Yeah, I think that's right. Going back to your first question in a way: after I figured out what the book was, but before I'd really gotten into it, I definitely had a moment, or more than a moment, where I thought, what have I gotten myself into? This is a book that really should be written by a tech journalist; I'm a science journalist, what am I doing? And then as I got further into it and realized what the book was, or how it had to be in order for me to write it, I realized, as is always the case with anyone's book, it's a book that only I could write. But also, a tech journalist by and large can't write this, because tech journalists do mostly rely on access. And if you write a book like this, you lose that access, and then you can't do tech journalism anymore. Which is not to say that all tech journalists are only working based on access, but for a lot of them, that's what's going on. And I had no such problem, because I never had any access to begin with, which is why there's a list at the end of the book of people who I attempted to interview but who said no.

    Nandita Bajaj (09:23):

    And you do share some things with these delusional tech billionaires, which is your love for science fiction, which also gives you credibility to write about the things they continue to believe in as real, even though they were always meant to be fictional.

    Adam Becker (09:39):

    Yeah, exactly. I mean, I think that there's this strange cultural shift that's happened where back in the eighties, nerds were not seen as a dominant cultural force, whereas now the biggest movies in the world are comic book movies and science fiction movies or science fiction, comic book movies. And also the wealthiest people in the world are people who work in a culturally nerdy industry, the tech industry. But I still think that there's this idea that if you go after people for liking science fiction, then you're punching down in a way and like, no, I'm a science fiction fan. I love science fiction. I'm a scientist by training. I love science. The problem is not science. The problem is not even science fiction. The problem is that these people are bad at reading science fiction and don't understand science. It's not so much that they take it seriously. They take it literally.

    (10:31):

    They take it as a blueprint for the future, when that's never what fiction is. And it's certainly never what good fiction is. Anything that was written as a blueprint for the future is going to be trash. It's not going to read well. Great art is never meant as a blueprint. And I, as a science fiction fan, of course believe that a great deal of science fiction is great art, or at least some of it is. People laugh when I say this, but I think Star Trek is great art, or a lot of Star Trek is. Some Star Trek is great art; obviously, with that many hours of episodes, not all of them are going to be amazing, but some of them really are. And it's not meant as a blueprint for the future, or insofar as it is, it's not about a glorious future in space. It's about a future where we treat each other better. And it's really about the present. And it's never been subtle about that. As I talk about in the book, one of the arguments against it being great art, and I think it's a misguided argument, but it is true, is that Star Trek is very direct in its metaphors about the present. Star Trek is not attempting to be subtle when it talks about problems in the world here and now. And that's what almost every single Star Trek episode is about.

    Alan Ware (11:47):

    And meanwhile, as you note, according to Elon Musk, Jeff Bezos, Sam Altman, and more, the only good future for humanity, the sci-fi they'd like to make real, is one powered by fantastical technology, with trillions of humans living in space, functionally immortal, served by superintelligent AIs. And as you write, these are 'wildly implausible and profoundly immoral visions of tomorrow,' and 'there's no good evidence that they will or should come to pass.' That's a great summary, and we'd love to get into some of the specifics of why those off-planet dreams are delusional. Like you, we also agree that calling out their delusions is a useful social practice; we need some well-deserved ridicule targeted at them. But first we'd like an overview of the basics. Why do Musk, Bezos, and other space enthusiasts think we need to leave Earth, and what are they proposing?

    Adam Becker (12:38):

    Yeah, credit where credit's due for Musk. He's been fairly consistent about this over the last 10, 15 years. I mean, there's all sorts of stuff he's been massively inconsistent about, and deadlines he's blown through. But he has been saying for quite some time that we need a backup for humanity on Mars. And that's in case an asteroid hits Earth, or nuclear war, something that kills off everyone on Earth. We need a backup for humanity on a second planet, and he thinks Mars is suitable for that. And so to achieve that, he wants to put a million people living in a colony on Mars by 2050. Bezos has a somewhat different vision. He thinks that the problem is that resources on Earth are limited, and so if we just stay on Earth, growth has to end eventually. But he does recognize, credit where credit's due for Bezos, that there are no other planets in our solar system where we could live. And so his proposed solution is to put hundreds of thousands of enormous space stations into orbit around the sun, at roughly the same distance as the Earth, to house a population of, and this is the number he's given, a trillion people living in the solar system. He says that way we could have a thousand Mozarts and a thousand Einsteins, and then Earth would mostly not have people living on it; it could be like a giant park. And other billionaire and non-billionaire space enthusiasts talk about similar things. These are often the reasons given. There's a third one that's often given, which is: oh, we need to have a frontier for sort of psychological and psychosocial reasons, cultural reasons. It's important for there to be a frontier, and space is the final frontier.

    (14:33):

    And there's also stuff that Musk talks about. He talks about we have to preserve the light of consciousness by making human civilization multiplanetary. Or there's this guy whose name I don't remember, I mentioned him in the book. He's a venture capital dude who says, humanity faces a choice. Either we can expand out into the stars forever or we can fade away, and this is the way that they talk about this. It's either growth or death. And if you push them on that, they'll say something like, well, in five or 6 billion years, the sun is going to expand and start to die. And if they know a little bit more, they'll say, actually in less time than that, in about a billion years, the sun is going to be so hot that the oceans will start to vaporize and you'll get a runaway greenhouse effect here on Earth. And Earth will turn into a planet a bit more like Venus, which is a terrible place. And both of those things are true to the best of our scientific knowledge. So you asked me to explain what they believe and why they believe it, and that's the answer to that question. What's the next question?

    Alan Ware (15:45):

    Based on your background as an astrophysicist, why are they so biophysically delusional, the Mars dream and the Bezos dream?

    Adam Becker (15:55):

    Yeah, so let's start with Musk. Basically, the problem is that Mars is terrible. Mars is just an absolutely awful place. So many reasons why Mars is a horrifying place to live and it can't support long-term, large scale human life. I get into a lot of that in the book, but to give a shorter answer than I give there, the radiation levels are too high, the gravity is too low, there's no air, and the dirt is made of poison. There's no biosphere there to support human life. You would have to build an entire self-contained biosphere because Musk, to go back to his plans for a second, he didn't just say he wants a million people living on Mars. He said that the colony has to be self-sufficient in case the rockets stop coming. So you can't just have food shipped in from Earth. It has to all be set up there, which if you want to have a completely independent city on Mars, then yeah, it can't be dependent on rockets from Earth.

    (16:58):

    The problem is that that's not going to happen. Well, one of the problems is that that's not going to happen. It's not going to work for so many reasons. There's all the stuff I just said right? Here on Earth, we have two main things that protect us from radiation in space, our Earth's magnetic field and our thick atmosphere. Mars has neither of those things. Here on Earth we have an atmosphere that we can breathe. Mars doesn't have that. Here on Earth we have a biosphere that grows food. Food literally grows on trees here. Mars doesn't have that. Mars also doesn't have the ecological necessities for growing plants. Musk talks a lot about terraforming Mars making it more like Earth, so you could get a biosphere going there. He suggested doing that, and I swear to God I'm not making this up. He has suggested doing that by detonating nuclear weapons over the Martian polar ice caps in order to melt them.

    (17:49):

    And he says that will create a thick enough atmosphere to support plant life, and then you put plants there and that puts oxygen in the atmosphere to support human life. He is wrong. There's not enough there to support plant life. He was told that multiple times and he just didn't listen. He said, no, you're wrong and didn't cite anything. He just thinks that he knows better than everyone because he has more money. He says that this is a good plan in case an asteroid hits Earth. More asteroids hit Mars than hit Earth. It's closer to the asteroid belt. And even if an asteroid as big as the one that killed off all the dinosaurs except for the birds 66 million years ago, that big famous asteroid impact, the worst single day in the history of complex life on Earth, that day was a better day for being alive than any day on the surface of Mars, probably ever.

    (18:38):

    Certainly in the last couple billion years. We know this because we are here. There were mammals that were on Earth at the time, and those mammals survived that day. Even some of the dinosaurs survived. That's why we have birds. Birds are dinosaurs. There is no mammal or bird or I think any vertebrate. There might be a couple invertebrates, but I am pretty sure no vertebrate, definitely no mammal could survive on the surface of Mars without a spacesuit for more than 10 minutes tops, probably less, quite a bit less. And yet those mammals that survived 66 million years ago, pretty sure they didn't have spacesuits. Pretty sure they didn't have thermal insulation and all that stuff. They could burrow into the ground or go underwater, and that was about it, and that's how a lot of them survived. That might be how all of them survived. There were massive firestorms around the Earth.

    (19:32):

    It's possible that the entire Earth's atmosphere was, for a few minutes, heated like an oven set to broil. That's still better. A few minutes of that is still better than being on the surface of Mars. One last thing: a million people's not enough. If you want to have a fully independent civilization that can survive without anything from Earth, you need to have everyone on that planet filling in all of the functions of a high-tech economy like the one we have here on Earth, except probably even higher tech, to survive the rigors of Mars. If you somehow solve all those other problems, the best guess from economists about how many people you need to actually do all the things you have to do to get an economy like that running is somewhere around 500 million to a billion people. Musk is certainly not putting a million people on Mars. He's definitely not putting a billion people on Mars. So yeah, Mars, no way.

    (20:33):

    As for Bezos, those space stations, I mean, come on. A hundred thousand space stations, each one with 10 million people on it, each one bigger than the island of Manhattan. Getting that much material into orbit just to build one of those would be a challenge unlike anything human technology has ever taken on. We don't know how to do it. It's not just a matter of technology; there are also questions of basic science that have to be answered, and the answer could very well come back: oh, you can't do that. Even if you could build one of those, which is a pretty big if, what would you have? You would have 10 million primates in a can surrounded by the cold vacuum of space. And all you need for everyone in that can to die is for one or two people to go crazy and open a window. Which, to be fair, there aren't any windows to open terribly easily, but one crazy enough person could manage to open a window, open a pipe, open a vent.

    (21:48):

    It's very, very easy for something to go wrong. Also, you're talking about a wholly artificial, completely contained biosphere that can support 10 million people. Are you serious? And now you want to do that a hundred thousand times to support a trillion people. I think that you would have to do an awful lot of work to show that the carrying capacity of the solar system is that large. Even if you strip mine Earth and every other planet, I am not convinced that you can do that. Yes, the energy from the sun is there, but that is not the only thing you need.

    Alan Ware (22:26):

    You had mentioned in the book, I forget at what energy growth rate, that 3,700 years from now we'd be using the energy of all the stars in the observable universe.

    Adam Becker (22:35):

    Yeah, yeah, that's right. I mean, Bezos says that we need to do this because in a few hundred years, if we continue growing our energy usage, we'll be using all the energy that the Earth receives from the sun. That's true. And there are also thermodynamic limits that would cause the temperature on Earth to rise to an unsurvivable point at around the same time. But keeping that trend going out into the solar system the way he wants only buys another thousand years. And then, if you spot him a warp drive that lets you go faster than the speed of light, that only spots you about another 3,000 years, I think. And so then you're using all of the energy in the observable universe. So you can do that, or you can say, oh, hey, yeah, maybe growth ends. Maybe that's okay. So yeah, there are a lot of problems with Bezos's ideas as well. And fundamentally, a lot of this is about these guys not understanding that growth can end, and that's okay.
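    The arithmetic behind those timescales is plain exponential growth. As a rough sketch of Becker's point, not a figure from the episode, the snippet below assumes humanity's current power use is about 18 TW and grows at about 2.3% per year; both numbers, and the star count, are illustrative assumptions:

```python
import math

CURRENT_POWER_W = 18e12      # humanity's current power use, ~18 TW (assumed)
ANNUAL_GROWTH = 0.023        # assumed growth rate, ~2.3% per year
SUN_OUTPUT_W = 3.8e26        # total luminosity of the sun, watts
STARS_IN_UNIVERSE = 1e22     # rough star count in the observable universe

def years_to_reach(target_watts):
    """Years until exponentially growing power use reaches target_watts."""
    return math.log(target_watts / CURRENT_POWER_W) / math.log(1 + ANNUAL_GROWTH)

# Consuming the sun's entire output: a bit over a thousand years.
t_sun = years_to_reach(SUN_OUTPUT_W)

# Consuming the output of every star in the observable universe:
# only a couple of thousand years beyond that.
t_universe = years_to_reach(SUN_OUTPUT_W * STARS_IN_UNIVERSE)

print(round(t_sun), round(t_universe))
```

    Small changes to the assumed rate or star count shift the answer by centuries, but the punchline survives: at this growth rate, each extra factor of ten in energy buys only about a century.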

    Nandita Bajaj (23:34):

    Right. And then you also talk about another branch of this off-planet futurism, which imagines escaping our sun's eventual death by uploading human minds into machines. And there are people like Ray Kurzweil, a director of engineering at Google, who have argued that this merging of mind and machine is both inevitable and desirable. And critical to this faith is the belief that AI development will achieve something they call artificial general intelligence, AGI. And there seems to be more than a little bit of confusion about what AGI means, and you suggest that this confusion may be intentionally generated. Can you help us understand what's going on there?

    Adam Becker (24:17):

    Sure. Really quickly, just to say something about the death of the sun: I go into this in great detail in the book. We're not going anywhere. We're not leaving the solar system. That's just not happening, full stop. So that just means we've got to find a way to deal with the death of the sun, but also, that's not going to be humans dealing with it; humanity's not going to last that long. Even if we survive, our descendants will have drifted into some other species by that point. Anyway, AGI: it is ill-defined. I think the concept of intelligence is ill-defined. I mean, there's so much in Kurzweil's dream and the dream of uploading brains that just doesn't really work. The idea that there is this thing called intelligence that you can sort of quantify, and then the idea that you can get a machine's intelligence turned up to a point where it's smarter than a human or all humans, it's all pretty fuzzy.

    (25:14):

    And yet there's enormous amounts of money being poured into this on the assumption that it'll all pan out, even though there's no reason to believe that and lots of reason not to. There's also an unspoken assumption in Kurzweil's dream, which is shared by so many of these tech billionaires, that the human mind is software running on the computer of the human brain. The brain is not a computer. Our experience of the world is not software running on a computer made of meat. I believe we are biological machines. I think that there's very good scientific evidence that that's what we are, but the definition of machine in there is quite a bit broader than just computer, or even something designed and built, because we evolved. And that's one of many, many differences between a computer and a brain. And I think some of the talk around AGI is intentionally vague, but I think some of it is just unintentionally vague.

    (26:19):

    There's a strategic advantage to having certain terms be vaguely defined. George Orwell talked about this in his amazing essay, Politics and the English Language. He talked about how there are certain words where the attempt to define them is resisted by all sides, because it's politically disadvantageous for these words to have definitions. And these are often words like democracy or fascism. AGI has sort of become one of those things. And the definitions given are generally wildly bad. If you go look at the OpenAI charter, they have a definition of AGI that is, and this is almost word for word, I'm not going to quite get it right, but the definition is something like a machine that can reproduce any economically productive activity that humans engage in. And that is vague. That's still vague and weirdly narrow, right? Because the real definition of AGI is something like that stuff in science fiction, something like Commander Data from Star Trek. That's AGI.

    (27:27):

    That's what they're trying to do. And if that's really what you're aiming at, something that is like a human but in a computer, something that can do what humans do, but in a computer or a robot, then 'any economically productive activity' falls short, because there are all sorts of things that humans do that are not economically productive, that are an important part of the human experience, and that you can't be a human without doing. If I take a long walk with a friend and we have a long and meaningful conversation about our personal lives or about what the world was like 50 million years ago, these are not economically productive activities. If I sit in my room and read a book that I got from the library, that's not an economically productive activity. If I daydream, if I make art, these are all important parts of the human experience, right? Hell, if I have the flu, is that pleasant? No, but I do think that illness is actually kind of a part of the human experience too. If we could get rid of the flu, we should get rid of the flu. I'm not pro-flu. But yeah, I think it's a fantasy. It's not well-defined. And insofar as it is well-defined, it doesn't work. I mean, hell, there was a paper not too long ago where a bunch of people got together to try to define AGI because they'd heard too many people saying that it wasn't well-defined. The paper came out and was immediately dinged because some of the citations in the paper weren't real and had been hallucinated by AI. So yeah, it's going real well, is what I'm saying.

    Nandita Bajaj (29:02):

    Oh man. And then critical to the dream of AGI, of course, is Moore's Law, the observation that computer chips keep getting more powerful as engineers pack more transistors onto them, roughly doubling that power every two years. These technologists often assume AGI will arrive through continued exponential growth in computing power, as described by Moore's Law. How well is that assumption holding up now?

    Adam Becker (29:27):

    I mean, look, the one thing that we know is true of any exponential trend is that it ends. Gordon Moore certainly knew that about his own law. He said, and I think this was somewhere around 2000, Moore's Law's got to end, and if it keeps going at the rate that it's going, probably sometime in the 2020s. And you know what? He was right. And the reason he knew that, the reason he was able to predict that, is that Moore's Law says the number of transistors that you can pack into an integrated circuit of a given size will double every 18 months. This means that the transistors have to get smaller. The chips are made of silicon. Silicon, like everything in our lives, is made of atoms. Atoms have a particular size. You can't make smaller silicon atoms. That one I'm pretty solid on. That's not going to happen. And you also can't make a silicon transistor out of something smaller than a single silicon atom-ish. And so once you get down to the size of an atom, you're done. And that is essentially where we are, or what we're on the verge of, right now. And so Moore's Law, depending on who you ask, is already dead or in the process of dying.
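    That prediction can be reproduced on the back of an envelope. The sketch below is illustrative, not Moore's own calculation; the 1971 feature size, atom size, and doubling period are all assumed round numbers. The key step: doubling the transistor count on a fixed-size chip shrinks the linear feature size by a factor of the square root of two, so the feature size halves every two doubling periods.

```python
import math

FEATURE_1971_NM = 10_000   # ~10 micron process of the early 1970s (assumed)
SILICON_ATOM_NM = 0.2      # rough diameter of a silicon atom
DOUBLING_YEARS = 1.5       # transistor count doubles every 18 months

# Number of times the linear feature size must halve to reach atomic scale.
halvings = math.log2(FEATURE_1971_NM / SILICON_ATOM_NM)

# Each halving of feature size takes two count-doubling periods.
years_until_atomic = halvings * 2 * DOUBLING_YEARS

print(1971 + round(years_until_atomic))  # lands in the late 2010s
```

    That the estimate lands in the late 2010s rather than the distant future is the whole point: on an exponential, the atomic floor was never far away.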

    Alan Ware (30:42):

    I thought it was interesting you included the Stanford and MIT paper, 'Are Ideas Getting Harder to Find?' Kurzweil assumes Moore's Law in everything; a lot of these technologists do. Could you describe some of what they found in that paper?

    Adam Becker (30:56):

    Yeah, that's absolutely right. Kurzweil assumes that Moore's Law is like an iron law of the universe and that it will continue to hold, not just for transistors but for absolutely everything, that there's this larger exponential trend. And he's not alone in this. I mean, Sam Altman had an essay called Moore's Law for Everything. And I talk about that in my book as well. This is just a widely believed thing, and it's not just OpenAI. Dario Amodei has a piece called Machines of Loving Grace, where he basically makes very similar claims about what could happen. But I think we all know on some level that there's such a thing as low-hanging fruit - that no matter what it is that you're doing, there's always going to be the easy parts. And then as you keep going, it gets harder and harder to keep progressing at the same speed.

    (31:47):

    Think of a jigsaw puzzle. You've got the edge pieces, and finding those is relatively easy, and getting them to line up is relatively easy. But you know what? The edges are one-dimensional and there are fewer of them. Most of the puzzle, especially if it's a big puzzle, is in the middle, and that's harder. And so it gets harder and harder to finish the puzzle. Yes, that's artificial. That's not the same as discovering new science or inventing new technology or creating new art. But think about learning a new skill. Part of the fun of picking up a new hobby is that you get a lot better at it really quickly, and then you plateau, and then maybe you break through that plateau and you progress again. Not that long ago I took up cycling up steep hills and discovered, to my amazement and delight, that I only had to do it a few times and I got a lot better really fast. The first time I went up this particular hill, I was huffing and puffing and had to stop four or five times before I got to the top, and then I could do it stopping once. That's the way that this goes.

    (32:52):

    This paper kind of suggests that everything's kind of like that, and that finding new ideas gets harder in any given field. And as we have more and more people doing science or building technology, it gets harder and harder to find new ideas because there's just been more and more people looking at this stuff for longer and longer. Do I fully buy that that paper is correct in all of the particulars of its arguments? I'm not sure. I would like to see more work done in that area. Do I think that it's very reasonable to assume that many things obey a law of diminishing returns?

    (33:34):

    Yes. And some of their particular examples are incredibly telling. They look at Moore's Law because of course they do. They know that if you're going to make the kind of claim that they're making, someone's going to trot out Moore's Law. And what they point out is, yeah, there's this doubling trend that's been going on, but the amount of money and resources that's been poured into each doubling of Moore's Law has been going up and up to the point where achieving a single doubling of the number of transistors in, I think it was like 2018, is something like 20 times the resources that it took to achieve the same doubling like 50 years earlier or something like that.
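    Taking the figure cited above at face value (stated from memory in the conversation: roughly 20 times the resources for one doubling in 2018 versus about 50 years earlier), a quick calculation shows the compound growth in research effort that would imply:

```python
# If one doubling of transistor density took ~20x the resources in 2018
# that it took ~50 years earlier (the rough figure cited above), the
# implied compound annual growth in research effort is:
effort_ratio = 20.0
years = 50
annual_growth = effort_ratio ** (1 / years) - 1
print(f"Research effort growing ~{annual_growth:.1%} per year")  # ~6.2%
```

    In other words, holding the headline doubling rate constant required exponentially growing inputs, which is exactly the diminishing-returns pattern the paper describes.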

    Alan Ware (34:19):

    So right now, Moore's Law, the AI boom, and the hope of AGI rest on the performance of large language models, known as LLMs. What do you think LLMs can realistically do? What are their fundamental limits, especially when we factor in energy, materials, cost, and any other weaknesses you see inherent to large language models?

    Adam Becker (34:38):

    I am not enough of an expert to say where the limit to what an LLM can do might lie, but I know enough to say three things. First of all, they are always going to require enormous amounts of energy, data, you name it. And they're starting to run up against those limits. Second, fundamentally what they do is they predict the next thing in a sequence based on training data. And so that means that they are always going to be better at doing the kind of thing that's already in the training data rather than doing something fundamentally new. They'll be better at interpolating rather than extrapolating, which I think really puts the lie to ideas about LLMs really accelerating science, which is something you'll hear over and over again, especially from Sam Altman. And third, they are fundamentally random. And because of that and because they are fundamentally about predicting the next word, they will always hallucinate, because hallucination is all they do.
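    The "predict the next thing in a sequence" point can be made concrete with a toy model. This is an illustrative sketch, not how production LLMs are built (they use neural networks over subword tokens, not word counts), but the core loop is the same: sample the next token from a distribution learned from training data, with no check against reality.

```python
import random

# Toy next-token predictor: learn which word follows which from a tiny
# training corpus, then generate by sampling. Nothing in this loop ever
# asks whether the output is true, only whether it is statistically likely.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count what follows each word in the training data.
follows: dict[str, list[str]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def continue_text(word: str, n_words: int = 4) -> list[str]:
    out = [word]
    for _ in range(n_words):
        # Interpolation over seen data: pick from what followed this word
        # in training, falling back to any corpus word if it's unseen.
        word = random.choice(follows.get(word, corpus))
        out.append(word)
    return out

print(" ".join(continue_text("the")))
```

    Every run produces fluent-looking but ungrounded text. In Becker's terms, hallucination is not a failure mode of this process; it is the process.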

    (35:44):

    They have no notion of truth and falsehood. What they know is what is likely to happen next in the given sequence, and whether or not that realistically reflects what's in the actual world, what's actually possible, that's not something that enters into the calculation at all. I was actually just yesterday reading a really lovely piece, which I haven't finished, by Anil Seth about LLMs and AI and the brain. And one of the points he makes is that similar approaches have been used for things that aren't language, like AlphaFold. No one thinks that AlphaFold might be conscious. It's just that language use creates this phenomenon I talk about in the book, pareidolia, the illusory appearance of a pattern or of something human where in fact there's just randomness.

    Nandita Bajaj (36:37):

    And you've seen this, and we've seen this: we've had a few guests talking about this trend that many people in Silicon Valley, these tech billionaires, identify as longtermist, and we've had folks like Émile Torres and Alice Crary on to expose some of the ideology. How would you describe the moral framework of longtermism, particularly ideas like total view utilitarianism, the repugnant conclusion, and existential risk? And furthermore, how does that moral framework justify their calls for extreme growth in population, energy use, and technological power? A lot of these people are also ardent pronatalists.

    Adam Becker (37:24):

    God, that's a big question. I'm not going to be able to define all of those terms and answer the question. So the really short answer is: read my book. The slightly less short answer: it's tempting to call this view utilitarianism on steroids, or galaxy-brained utilitarianism. I actually think that in a way what's more striking about it is its obsession with metrics, the quantification of absolutely everything. You could say, well, but that's utilitarianism, and yeah, that's probably right. But there is a notion that absolutely everything is measurable, and thus only what is measurable matters. Because what longtermists believe is that people in the future count, and count so heavily that they override moral consideration of the present. But they take that in a way that I think perverts the idea of ethics itself. Because of course we should consider people who come after us in our actions.

    (38:34):

    We absolutely should. And we do that when we do things like say, well, we need to preserve our biosphere, we need to not use up all of our natural resources, we have to think about the people who will come after us. Although with some of those things we're getting to the point where, no, we need to think about ourselves in 10 to 15 years. But still. The longtermists take this and say, oh, well, if one happy person is great, then 10 happy people is 10 times as good, and a billion happy people is a billion times as good. And so the most happiness is what we should be going for, and that means the most people, and then make them happy. And this leads you to this thing, the repugnant conclusion: that a very large number of barely happy people is better than a world in which the overall population is smaller but people are living more fulfilled lives.
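    The aggregation step behind the repugnant conclusion is simple arithmetic. These welfare numbers are made up for illustration, not drawn from the book: under total-view utilitarianism, a vast population of barely-happy people out-sums a smaller flourishing one.

```python
# Total-view arithmetic behind the repugnant conclusion, with made-up
# welfare numbers: total welfare = population x average welfare.
flourishing_world = 1_000_000 * 90    # 1M people living rich lives (welfare 90)
barely_ok_world = 1_000_000_000 * 1   # 1B people whose lives are barely worth living

# The huge, worse-off world wins on the total-view scoreboard.
print(barely_ok_world > flourishing_world)  # True
```

    Because total welfare is just population times average welfare, sheer headcount can always outweigh quality of life, which is why the framework pushes so naturally toward maximizing population.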

    (39:26):

    And this can go to pretty extreme places, as I describe in my book, but it means that not only are they pronatalist, but they're also big fans of not just interplanetary colonization, but interstellar and even intergalactic colonization, which is not happening; I'm not going to get into that. All I will say is that a light year is really big, and the nearest stars are light years away, and the nearest galaxies are millions of light years away. And if you want to know more, you can read my book. I could do a whole podcast on why that's not happening, why we're not leaving the solar system. But it leads to this obsession with growth, this obsession with making sure there's lots of people, with seeing the world as a set of resources that we can just extract, and with seeing anything other than humans, or conscious organisms experiencing happiness, as a wasted opportunity, as opposed to a natural world that, at least in part, I think we have no right to interfere with.

    (40:33):

    It's a really perverse and extreme ideology that ignores limits and ignores the real plight of people here and now in exchange for a hypothetical possibility in the future. They talk about existential risk. They talk about the possibility of a superintelligent AI killing us all. That's not going to happen. That's another thing we could talk about all podcast long, and we're not going to. What about risks that don't rise to the level of existential? What if they're wrong about what constitutes an existential risk? I think, and I have very good arguments to back this up, that they're completely wrong about the existential risk of AI. They think that's the single most important problem. The fact that we can have such a wild disagreement about that indicates that something's up with their moral framework, if they're saying, no, that's the most important thing and we have to work on it, and I'm saying that's a distraction from the real problems of the world, and they're also using their power and influence, and they have a fair amount of it, to push people to work on that at the expense of other problems. Toby Ord, one of the biggest names in this community, has said that the risk of human extinction or unrecoverable civilizational collapse over the next hundred years from a runaway AI is 50 times greater than the risk from global warming or nuclear war combined. I think this is a profoundly ill-informed and irresponsible statement to make, and I think pushing that message is wrong. I think he's done a great harm in the world by doing that.

    Nandita Bajaj (42:11):

    And to our dismay and yours, so many people are buying into this, especially because they've co-opted a term like longtermism. Long-term thinking has existed, for example in the Haudenosaunee principle of seven generations, and a lot of people think longtermism just sounds like that, so it must be a good moral framework. But as you just said, it's delusional at best and dangerous at worst.

    Adam Becker (42:39):

    Yeah. Musk has said that longtermism is a close match for his philosophy, but what I find myself thinking about a lot is actually Bezos saying, yeah, if we've got a trillion people living in the solar system, we can have a thousand Mozarts and a thousand Einsteins. And I'm like, okay, we've probably got people of that level of potential and capability living and dying in poverty right now. What about those people? Because one of the things this does, this thing I call in the book the ideology of technological salvation, is give everyone a free pass, a get-out-of-jail-free card, for thinking about the impact their actions are having here and now. And the longtermists will say, oh, but we donate a lot to people in the developing world, to interventions that will improve healthcare outcomes. And okay, first of all, there are some arguments that the work they're doing there is actually not that great.

    (43:28):

    But putting that aside, what about systemic change? What about why there are such massive disparities in the world in the first place? This is a set of philosophies that just reinforces extractive logic, global capitalism, colonialism, all of these ideas, all of these frameworks of power that we have in the world that have led to such an unequal and unjust world and have ultimately led to the erosion of democracy all around the world. These are problems that you can't solve by donating a few malaria nets, or even a large number of malaria nets. That's not how you deal with those problems. And if we don't deal with those problems, I would argue that that's an existential threat to humanity. But also, what gets to count as an existential threat, and whose threats get to be called existential, these are questions of power, and that is something the longtermist framework is not built to think about well. And there have been instances of people concerned about AI risk, or people who subscribe to this kind of longtermist thinking, being given opportunities to exercise power and doing it in a really naive way, or in some cases a criminal one. And more broadly, this ideology of technological salvation, of which all of these different things are manifestations, is a way of excusing the existing power structure and allowing it to persist in order to aim at an impossible utopia and save us from a non-existent apocalypse.

    Alan Ware (44:58):

    And you talk about some of the biggest believers in technological salvation, the effective accelerationists.

    Adam Becker (45:05):

    Yeah.

    Alan Ware (45:06):

    Could you go on about them for a bit?

    Adam Becker (45:09):

    I'd really rather not. They're so unpleasant. But I'll go on about them for a little bit so we have time to get to more stuff. The effective accelerationists think, oh yeah, this is all great. They're very explicit about this. They say, oh, what we need more of is growth and consumption, and we just need to go faster and make progress go faster and make AI go faster. And okay, once again, first of all, you're not defining any of your terms. What do you mean by go faster, make progress go faster? That is, I would argue, probably hopelessly undefinable; I don't think you can actually define it. How fast does progress go? Well, one year per year, last time I checked. But it's so nakedly uninterested, at best, in the evils and harms perpetrated by capitalism, colonialism, growth, you name it. And that's at best. I would argue that a lot of these people think that those harms are actually good.

    (46:16):

    A lot of these people are very openly racist and have professed a belief not just in eugenics, but in discredited racist pseudoscience that says that some people are just born better than others, and that this lines up with socially constructed racial boundaries. All of that is nonsense, and hateful nonsense at that. But these people believe that they know better than the experts. And I don't just mean the effective accelerationists; I mean the tech billionaires and the people in these other movements, longtermism, effective altruism. There's just a persistent belief that we know how to do it right and everyone else is getting it wrong. We know how to do science better than the scientists. We know how to do ethics better than any ethicists who have come before us. We know how to make the world a better place to live than anyone else who's ever been alive.

    (47:11):

    Like, well, these are questions that have been debated for thousands of years, and it's not because everybody else who's ever lived was stupid. It's because these are difficult questions and we have specialists and experts for a reason. It's because the world is a fundamentally complicated place. But if you believe that you are wealthy, successful because you are smart, and if you believe that you are the wealthiest and most successful person who's ever lived because you're the smartest person who's ever lived, then you're sort of forced to believe that the world is a simple place because anything that you can't understand must not be important or must not be real. And you can't possibly be an expert in everything because the world is just too complicated. And frankly, humans have been around for so long that there's just so much knowledge that you could never learn in one human lifetime, but none of that can possibly matter. And so instead, your simplistic ideas must be correct, and anything that reinforces your power, well, that's probably good too, because you know better than everyone. And so if someone says, well, yeah, there's a bunch of scientists who think that racist pseudoscience isn't real, but what do they know? I know more about genetics than geneticists. I know more about the genome than genomicists. I know more about the history of human civilization than historians because I have a net worth of a billion dollars. And if they were so smart, how come they're poor?

    Nandita Bajaj (48:34):

    And this is the kind of stuff that leads to people like Malcolm and Simone Collins to be platformed.

    Adam Becker (48:42):

    I'm sorry. I'm sorry. I just can't take them seriously. They're so ridiculous. They're seriously awful people, but they keep getting platformed. And let's be very clear: Malcolm Collins slapped his three-year-old son really hard, right in front of a journalist. That's horrifying. Yeah, they keep getting platformed because, for one thing, they are awful in a way that drives clicks, the rage bait and clickbait. I wouldn't be surprised if Malcolm Collins decided he wanted to do that in order to drive more clicks. But also maybe not; maybe he does that all the time. Which one's worse? I don't know. But I mean, look, that's eugenics right there. And their idea isn't just racist. It's also stupid, because it won't work, and it won't work because it turns out they're just wrong about how genetics works. They're wrong about how raising kids works, right? Oh, I want to have a large number of people agree with me about things, so what I'm going to do is have an enormous number of children and then indoctrinate them. Because you know what never happens? Kids never disagree with their parents. And when they're adults, they always agree with what their parents believed, right? That's something that's just true. I mean, everybody knows that teenagers are really well-behaved and hate rebellion. It doesn't make any sense from any perspective. So yeah, I mostly think that they are ignorant attention seekers who are probably abusing their kids. Oh, and also racist and misogynist. Yeah.

    Nandita Bajaj (50:26):

    Right. Misogyny. We haven't even gotten to that.

    Adam Becker (50:29):

    Yeah, there's lots of misogyny.

    Nandita Bajaj (50:32):

    Yes. But you also suggest that beneath the ideology of technological salvation that's undergirding all of this work is a big fear. It's a fear of an end, fear of death. Can you briefly speak to how that fear is shaping their relationship to nature, to ecology, to limits on the planet?

    Adam Becker (50:53):

    Absolutely. I mean, look, I will mention Malcolm and Simone Collins one more time and then we'll be done. Someone who wants to have that many kids and indoctrinate them to believe exactly what they believe? That sure looks like fear of death to me. And Ray Kurzweil is very open about how it was the death of his father that sort of pushed him to postulate all this stuff he believes about the singularity. And, you know, death is horrifying; it makes sense to be afraid of death. And I think it would be great if we could eliminate more causes of death. I think it would be great if we could allow people to live longer, healthier lives. But I also think that the science is pretty clear that death's not going anywhere. And the science is also pretty clear that we need to find a way to live in harmony with our environment.

    (51:41):

    I don't think that doing that is impossible, but if you have an overwhelming fear of death, and at all costs you need not to die, it makes you very selfish, and it makes you disconnected from reality, because death's not going anywhere. And I think we're also seeing it lead to a disregard for the natural world. Even if the only thing you care about is intelligent beings building civilizations, and I would argue that you should care about more than that, you have to let the world be in a way that other intelligent beings could evolve and have their time in the sun. One of the things that made me the most angry when I was writing this book was seeing people quote Carl Sagan as a justification for their views. And I would say, oh, you want to play the let's-quote-Carl-Sagan game? Fine, let's play that game. You want to come into my house? You want to talk about space, and Carl Sagan, and science fiction? Fine, let's do that. I was born ready for this.

    (52:52):

    But Sagan said that if there is even something as simple as a bacterium on Mars, we shouldn't touch it. We should leave it for the Martians. And I think that's just true. But if you're that afraid of death, nobody and nothing else matters. I'm afraid of death; I'd be afraid if it looked like I was about to die. But I don't let that be an overriding concern that governs my entire life, because otherwise I'd never actually be alive. And I'm never going to have a billion dollars, and I don't want that; it's not something that's important to me. But I'd like to think that if I had a billion dollars, I'd give a lot of it away, but I'd also find a way to have some fun. I've just never seen anyone more miserable in a position of such power and wealth than Elon Musk. Can you call that living? Here's a dude who just lives in fear every day. I would not want to be him for anything. I'd much rather live a real life.

    Alan Ware (53:57):

    So the technologist worldview assumes agreement, for the most part, on social ends, like the longtermist focus on the happiness and utility of trillions of future humans. And it collapses a lot of political and social disagreement into purely technical problems. But democratically based politics is messy, contingent, imperfect. It's full of compromise, and it's not authoritarianism. What does meaningful democratic resistance to the tech billionaires look like?

    Adam Becker (54:25):

    I mean, like I say in the book, there's this idea that they have that you can solve any problem by throwing technology at it. And we know that that's not true. There's all sorts of problems in the world that can't be solved simply by throwing technology at them. There are problems that we have the technological solutions for that we don't need better technology to solve, like nuclear war being at the top of that list. We know how to dismantle every single nuclear weapon in the world. That's a solved problem from a technical perspective. We've just chosen collectively not to do it. We could end the threat of nuclear war tomorrow. These are problems of collective action and politics, not technology by and large. I think that part of the reason I wrote this book is that I think there's an underappreciated political power in defining what the future looks like, and these tech billionaires are trying to grab that power and run with it and set the terms on which we talk about the future.

    (55:23):

    And I don't want to let them do that, because they have no idea what they're talking about and they're incredibly cruel. And so I think talking about the future in a different way and making it clear, oh yeah, that's hot nonsense, that's not happening, that's stuff those billionaires are selling us. It's just a product that they're trying to sell us, and we shouldn't buy it. So talking about it, I think, is part of the solution. I think being aware that the stuff we've been told about AI, and the stuff we've been told about our glorious future in space, and the stuff we've been told about technology, it's all just a story, and it's not a particularly well-supported one, and there are good arguments against it. I think that also, especially here in the US, there's a sense of hopelessness about fighting these entrenched powers, especially given how tight they are with the Trump administration. But there's more of us than there are of them, and I think they have underestimated the lengths to which people will go to retain control over their own lives and not cede that control to an idiot dictator or clueless tech billionaires.

    (56:34):

    I think that also, we don't all work for the big tech companies, but a lot of people know someone who works for those companies. Encourage them to unionize. Organized political power and organized labor power are the ways out of this mess. We cannot let these oligarchs decide what the future of our species is. If we want to fight back against them, we need to take advantage of the fact that there are more of us than there are of them. And how do we do that? We organize, and some of that's political, some of that's at the ballot box. I still believe in the power of the ballot box, and we are going to have a very important election in this country this year, and Trump is not going to be able to cancel it, no matter what some people online might say. But labor power can also do a lot, and can do a lot quickly, because it is what these tech billionaires are terrified of: that their workers will wake up one day and realize, wait, I don't like what my CEO is doing. Yes, the tech CEOs have lined up behind Trump, but the rank-and-file workers at the big tech companies in the US mostly hate Trump. And so I think this is a ripe opportunity to try to bring those billionaires to heel and say, oh, you're doing something that everyone who works for you hates. They could just stop. And then what would your money be worth?

    Nandita Bajaj (57:53):

    That's a really helpful answer. And Adam, we know you have to go, so we'll wrap this up. But thank you so much for your time and for doing the really important work that, like you said earlier in the interview, only you could have done just given the overlap in interests and the background that you have in astrophysics and science. We love that you are trying to be more like Carl Sagan in your communication of science. We need people like you, and we are grateful for your time today. Thank you, Adam.

    Alan Ware (58:23):

    Thanks, Adam.

    Adam Becker (58:23):

    Thank you so much for having me. It's been a pleasure to be here, and thank you for the work that you're doing, getting the word out about this stuff.

    (58:30):

    That's all for this edition of OVERSHOOT. Visit populationbalance.org to learn more. To share feedback or guest recommendations, write to us using the contact form on our site or by emailing us at podcast@populationbalance.org. If you enjoyed this podcast, please rate us on your favorite podcast platform and share it widely. We couldn't do this work without the support of listeners like you and hope that you'll consider a one-time or recurring donation.

    Nandita Bajaj (58:57):

    Until next time, I'm Nandita Bajaj, thanking you for your interest in our work and for helping to advance our vision of shrinking toward abundance.
