AI and the Decline of Human Agency
AI, under the dangerous control of tech oligarchs, is creating a world of shrinking human choice, creativity, and connection. Technology journalist Jacob Ward, author of The Loop: How Technology Is Creating a World Without Choices and How to Fight Back, describes why restraint and resistance are necessary to fight back against the AI juggernaut. Highlights include:
How tech journalism emphasizes novelty and business profits and amplifies tech companies' hype as journalists seek to maintain access to powerful tech leaders;
How profit-driven AI exploits a human bias toward fast, easy thinking and decision-making that leads us to outsource our choices and judgment to automated systems;
Why AI large language models are like cover bands providing the 'greatest hits' of humanity's past achievements - an 'artificial hive mind' biased toward middle-of-the-road, derivative, and unoriginal ideas;
How impersonal, unaccountable, 'black box' AI decision-making creates Kafka-esque systems in government services, jobs, and loans - disproportionately harming the least powerful in society;
Why AI large language models are 2 to 3 times more biased than the average person across various cultural and demographic dimensions;
How AI will increase addiction and social isolation, replacing real-world relationships with flattering, always-available chatbot 'friends';
Why our collective sense-making and democratic decision-making will be further threatened by AI - creating even more tightly sealed, individually customized information bubbles that conform to our feelings, not the truth;
How many tech oligarchs pushing AI are also involved in genetic engineering projects with the aim of breeding 'optimized' babies;
Why tech companies' legal liability and U.S. states' AI regulations are hopeful avenues of AI pushback;
Why we need to rediscover the value of restraint and realize that not all innovation is beneficial for humanity and the planet.
Jacob Ward (00:00:00): There's a certain kind of feverish, “we're going to change the world for the better” jingoism at the top of a lot of tech companies, but AI is a whole new thing in that they are very openly talking about doing damage to society, doing damage to employment, but in the same breath they basically say that's all going to be worth it when we're on the far side of the utopia that this technology is going to make possible. And that kind of open admission of the risks and the zealotry is really new, a near-religious certainty they seem to have about the benefits of this technology. And we're mass adopting a system that's going to be two to three times more racist than we are. It's a real problem. And I can think of no greater threat to democracy than that.
Alan Ware (00:00:41):
That was technology journalist Jacob Ward. On today's episode of OVERSHOOT, Jacob discusses his recent book, The Loop: How Technology Is Creating a World Without Choices and How to Fight Back.
Nandita Bajaj (00:01:02):
Welcome to OVERSHOOT, where we tackle today's interlocking social and ecological crises, driven by humanity's excessive population and consumption. On this podcast, we explore the needed narrative, behavioral, and system shifts for recreating human life in balance with all life on Earth. I'm Nandita Bajaj, co-host of the podcast and executive director of Population Balance.
Alan Ware (00:01:28):
I'm Alan Ware, co-host of the podcast and researcher with Population Balance. With expert guests covering a range of topics, we examine the forces underlying overshoot - the patriarchal pronatalism that fuels overpopulation, the growth-obsessed economic systems that drive consumerism and social injustice, and the dominant worldview of human supremacy that subjugates animals and nature. Our vision of shrinking toward abundance inspires us to seek pathways of transformation that go beyond technological fixes toward a new humanity that honors our interconnectedness with all of life. And now on to today's guest.
Jacob Ward is a longtime technology journalist. He's been a correspondent for NBC News, Al Jazeera, CNN, and PBS, and was once the editor-in-chief of Popular Science. In 2022, he published The Loop: How Technology Is Creating a World Without Choices and How to Fight Back, which predicted the commercial AI moment we're in. He's a reporter in residence at Omidyar Network and hosts the Rip Current podcast. And now on to today's interview.
Nandita Bajaj (00:02:40):
Hi, Jacob. Welcome to the OVERSHOOT podcast. Thank you for joining us. It is so great to have you here.
Jacob Ward (00:02:46):
Hi, Nandita. Thank you so much for having me here. Hi, Alan.
Nandita Bajaj (00:02:49):
And Jacob, we're excited to chat with you about your book, The Loop: How Technology Is Creating a World Without Choices and How to Fight Back, which came out four years ago, in January 2022, before ChatGPT and other AI programs were unleashed on society and on our collective imagination with no consent from any of us whatsoever. And you were quite prescient in warning us about AI's power and its potential for abuse. As you've written, tech billionaires are "writing our worst instincts into our machines for the sake of short-term profit and efficiency." We appreciate how you discuss at length the social injustices that get reinforced through the enormous and increasing power of Silicon Valley tech oligarchs. And we are also especially concerned in our work about how AI could only serve to accelerate ecological overshoot by providing rocket fuel to the destructive techno-industrial capitalism that is threatening the future of all life on Earth.
(00:04:02):
With so many concerns surrounding AI, we're thrilled to have someone on the podcast who has explored the issue so deeply and shares many of those concerns. So thank you again, Jacob, for doing the work you're doing and exposing the harms of a lot of these highly unregulated technologies.
Jacob Ward (00:04:24):
Well, I appreciate that. There's a lot of common cause here between us. God, it's weird to hear you say that it was four years ago. It was four years ago. It feels like I just finished the stupid book, but it is a little while ago. And yeah, you think you want to be right when you write a book predicting something that might come to pass. The bummer, it turns out, is that when you are somewhat right, as I happen to be, you then have to watch your thesis play out in real time. And that's what the last four years have been like for me ever since that book came out. I thought I was several years ahead of the commercial release of an AI system on the scale that we saw with ChatGPT, but it turns out I underestimated the motivations and ambitions of the people making this stuff.
(00:05:05):
And so, yeah, for me, the thing that has been so discouraging is to see this theory of mine play out. I had just finished a big documentary series uncovering the patterns of human behavior, the ways in which we are very predictably gullible, our brains are kind of lazy, and we try to outsource a lot of our decision-making, none of which is to insult anybody. It's just how our circuitry works. I was combining all of that with the reporting I'd been doing around the advent of what are called transformer models, which made ChatGPT and that ilk possible, and the explosion in the ability of for-profit companies to mine our behavior and spot patterns in all kinds of cool ways, but specifically in the ways that human beings make decisions, and to start messing around with us for fun and profit.
(00:05:55):
And that is very much what, unfortunately, has come to pass. I think I underestimated, first of all, how fast it would move, and I didn't really understand the zealotry this technology would inspire in the people creating it. I'm used to a certain kind of feverish, “we're going to change the world for the better” jingoism at the top of a lot of tech companies, but AI is a whole new thing in that they are very openly talking about doing damage to society, doing damage to employment, and there being all kinds of existential risks to this technology. But in the same breath, they basically say that's all going to be worth it when we're on the far side of the utopia that this technology is going to make possible. And that kind of open admission of the risks and the zealotry is really new, this near-religious certainty they seem to have about the benefits of this technology. I didn't see that stuff coming, and I've been, I would say, surprised and alarmed by a lot of that.
Nandita Bajaj (00:06:53):
And you're quite uniquely positioned to be watching a lot of this stuff unfold. You've had decades of experience reporting on technology to the general public through major media outlets like NBC, CNN, and Al Jazeera. And from our perspective, a lot of tech reporting not only tends to amplify hype and take the claims of tech CEOs and marketers at face value, it also ignores the broader context of the social and ecological consequences of different technologies. And as you've said, sometimes it's well-intentioned and still ends up unleashing the worst consequences. And in this case, you're saying it's not even well-intentioned; a lot of the folks behind AI knew, or were deliberately engaging in, some of these harmful behaviors. So why does so much tech reporting tend to be unskeptical and prone to hype? And what is your general approach to thinking about and reporting on technology?
Jacob Ward (00:07:58):
I think there are many, many competing pressures that make it difficult to do this kind of work well. And I consider myself to be kind of a B student in this world, because it is such a hard job and the people who do it well have to do so many things well. I think, generally speaking, what I recognize in what you're saying is a prior era, the one I certainly came up in, in the '90s and early aughts, where the person in my gig, a tech correspondent, was basically considered to be kind of an upbeat nerd who would hype the tech that was coming out, shake hands with a robot at a trade show, or freak out in the back of a self-driving vehicle. There was a sort of boosterism to it that, certainly in television, made for good TV.
(00:08:43):
I mean, I have been guilty of this in my life. I have done a few of those sorts of pieces. And once upon a time I was the editor-in-chief of Popular Science magazine, whose whole raison d'être is being a booster of ways in which technology is going to make the future better. And I think, generally speaking, it's hard to fault people for that instinct, because for a huge amount of human history, innovation has brought fantastic results for the benefit of humanity. I mean, the world we get to live in now, the comforts we get to enjoy because of science and technology, are incredible. So there was that phase of reporting that was very, very upbeat and, as you say, uncritical. And then I was lucky enough to work at Al Jazeera, where the point of the place was, in many ways, to understand inequality around the world and, in terms of my beat, how technology was deepening that inequality.
(00:09:36):
And so that was a place where I got to lean into what is, for me, a much more instinctive position. I grew up the son of a historian of slavery and a labor-activist nurse. So I come by my anti-authoritarian instincts pretty honestly. And once we got into the social media era, that same sort of uncritical treatment of the industry created a huge amount of early boosterism. And then we saw those things begin to connect to big geopolitical events like the Arab Spring, where social media kind of gets the credit. But that's the point, I think, at which journalists in general began to understand: oh wait, our digital lives and our real lives are actually the same life. We shouldn't be thinking about those things as separate from one another. Our digital lives are, increasingly, our real lives. Now we're in an era where I think some very smart people are covering this stuff with a much more critical eye, with a much clearer sense that there's a profit motive here on the part of these companies, but there are still a bunch of competing facets.
(00:10:36):
So one is that some of the best-resourced reporting organizations are very often business newsrooms, and those are aimed at an audience of investors. So they're less concerned with the social, environmental, and other impacts, unless it's going to affect the bottom line of the company. But that's a kind of reporting that I always liken to sports reporting. It's a fan's take on this stuff. And so there's an uncritical quality to that, at least as far as the societal effects of tech. The last thing is just how fast this stuff moves and yet how systemic its effects are. So it's easy to get caught up in the day-to-day release of here's-the-latest-crazy-thing without stepping back and looking at it. I used to bump into this problem all the time. I had one anchor I worked with who would add the word tonight to every script, so that I would turn in a script that says, human agency is being attacked by AI and we're losing essential skills.
(00:11:33):
And then the word tonight would get slapped on the front of that. And tonight, human agency is getting blah, blah, blah. It speaks to the competitive pressure that news organizations are under. One producer said to me, "Listen, we report the news. We don't report what might happen." That's just a fundamental DNA problem when it comes to addressing the big societal and systemic effects of these systems through a megaphone that is typically about a bus crash, a vote on the floor, a thing that happened that day. So I feel like we're in a new world around this stuff, but there are still some big structural challenges to covering it well.
Nandita Bajaj (00:12:13):
Totally. And I really appreciate the focus you put on how the reporting is being done through an already kind of technologist lens, when the news agencies covering it have that investor base. So there's a bias toward reporting with that kind of novelty and excitement. And I also like that you used the word religion. It's kind of become part of the modern religion. We've talked to another guest, Michael Harvey, who wrote The Age of Humachines, and he talked about this concept of technologism, the notion that equates human progress with the progress of technology. So yes, there have been great innovations that have helped society, but in reality, technology for the most part is first and most available to the elites of a society, and then they get to decide how to unleash it, and what's beneficial for the rest of humanity gets dictated through the very narrow lens of a small number of people. And, as you said, it's really leveraging our own gullibility to this kind of hype.
Jacob Ward (00:13:30):
Yeah. I mean, think about the ways in which human beings are built. People are constantly saying to me, "Well, what's different about this? Isn't it just like the advent of the television or the printing press or the internet or social media? We've come to grips with some of that before, so don't worry about it." And my answer is, well, this is a very different thing, because it plays on a very particular human foible, which is our programming that has us wanting to outsource our choices whenever possible. The last hundred years of behavioral science, a subject I got to travel all over the world interviewing a ton of people about for a PBS series called Hacking Your Mind, has basically established that the reason we are all sitting here together is that we are the branch of the human evolutionary tree that made very rapid decisions, learned how to very quickly communicate danger between us without even really having to talk about it, and basically refined an instinctive decision-making system.
(00:14:35):
So the problem, in the modern AI context, is when you vacuum up all of the ways that stuff manifests itself in our modern information ecosphere, all the things we've written on the internet and the movies we make and all of that, and then you give it back to us in the guise of an infallible, omnipotent answer machine. We love to say, "Well, this thing obviously knows everything. I should take all my advice from this thing." I fall into this. I've spent my whole career thinking about this stuff and I fall into this. And so we are suddenly, I think, playing with fire to an extent that people didn't understand when they were making this stuff. Because the other thing I've found is, as you say, there's near-religious certainty that innovation is good. And I have a new book project I'm kicking around right now, and the title of the book is Great Ideas We Should Not Pursue.
(00:15:26):
And it's based on something my grandfather used to say. He was an academic who was sent to India by the Ford Foundation in the '50s to try and set up educational systems there. And all these CIA guys and Wharton guys would come to him at cocktail parties and say, "Here's what we should do in India." And he'd say, "That's a great idea. We shouldn't do that. Let's not do that." And it's that kind of restraint that I think is actually one of the defining characteristics of the great choices that human beings have made in addition to all of the great innovation we've done. Anyway, when I tell people in Silicon Valley the name of that book, I was on a podcast the other day, I told a guy that thing, his head popped off. He was so angry at that concept. He said, "What are you talking about?"
(00:16:02):
"Of course we're going to build it if we can build it." And you get this attitude from scientists too: "If the data takes me there, I've got to go there." That's the training of science, right? So restraint is not a thing we have a lot of cultural experience with. And you combine that with all of this pattern-recognition technology being thrown at a very pattern-driven species, and I think we're in really new territory here.
Alan Ware (00:16:24):
Yeah. And in the book, you talk about the first loop, second loop, and third loop, and it sounds like you were beginning to describe those loops. Do you want to provide an overview of those before we go deeper?
Jacob Ward (00:16:35):
So what I sort of came into was this understanding that when you combine human instinct with the profit motive, you get into a kind of recursive pattern, a loop, where the profit center can't help but try to limit choice in human beings and drive us to a place where our instincts are in charge, because we're easier to sell to when we're that way. So I've identified these three loops. The first loop is what we've been talking about here: this ancient circuitry that makes the vast majority of our choices for us and gets us into patterns of life that we don't even really recognize are happening. Then there's what I describe as the second loop, which is the way in which modern profit motives play on that loop of instinctive decision-making. And that's where you get things like cigarettes and pornography and addictive video gaming.
(00:17:36):
And, I would argue, social media. These are all forms of behavior modification that play on, and turn out to limit, our choices wherever possible, when you can make a buck off doing so. And the third loop, the one that I'm really worried we're entering right now, is a vortex of diminishing choices, in which we are shown an AI system that looks as if it's expanding our choices and giving flower to our creativity and freeing up our time, when in fact what it is doing is creating a systemic limiting of choice that serves to corral us into fewer and fewer choices. And that is, I think, a function of the profit motive, but it also turns out to be a function of how the technology works. These systems are just greatest-hits machines. They don't have original thought. All they do is create a medley of past hits.
(00:18:31):
It's a cover band for humanity, right? So they're not going to play a new song. They're only going to play old songs or medleys of old songs. And as a result, you're in a world where what feels like something new is in fact the same soup being stirred, and you're just at a different side of the pot. And we're starting to see this come out in real research. It's not just anecdotal. There's a big academic conference every year that sort of sets the tone for how AI research goes. It's called NeurIPS, and it just took place a few weeks ago, and they awarded the top papers of the show. And one of them was from a multi-university team: Carnegie Mellon, University of Washington, a bunch of places. And they had this incredible paper. I can't remember the subhead of it, but the head of it was Artificial Hive Mind.
(00:19:17):
And what they basically did was take 25,000 or more creative prompts, open-ended requests for creative output, things like, write me a poem about time, and then they put them into the 70 or so LLMs that are available out there right now, these ChatGPT-style programs, and then they measured the output. And here's what they found, and this is the really disturbing part. They discovered that not only does each individual LLM deliver a narrower and narrower set of responses over time to an open-ended creative prompt, the write-me-a-poem kind of stuff, but the 70 LLMs also converge on the same set of responses. And this is, again, creative output, not fact. So more often than not, all of these different LLMs, when you ask for a poem about time, will say, time is a river, blah, blah, blah, and give you this kind of trite, college-freshman effort of a poem about time.
(00:20:24):
And so the problem is that you're suddenly in a world where you think you're accessing all of human creativity through this thing, but you're not. You're accessing a sort of greatest-hits machine that wants to give you what it deems to be the most acceptable answer. It doesn't tell you that that's what it's doing. It just says, here's the answer, here's the thing. And even if you were the kind of person who said, "You know what? Today I'm going to mix it up and try to get a different perspective and go over here to Claude or over here to Llama or one of these other LLMs," it turns out you're not actually drawing from a bigger canon of work. You are instead entering what these researchers are calling the artificial hive mind. And they literally say in the paper that it's a threat to pluralism, that we are entering a world in which we might inadvertently wind up doing away with diversity of thought because we're going to use these systems to do our thinking for us.
(00:21:11):
Again, this is all happening so much faster than I realized. And it's part of why I've been mostly surfing the last year because it's just such a bummer to watch my stuff come to life here.
Alan Ware (00:21:21):
Yeah, that is concerning, the divergent-versus-convergent-thinking issue, divergent thinking being essential for creativity and new insights. If all of these models are following a kind of bell curve of probabilistic convergence, imagine what that will do to the blandness of cultural products, from writing to music. And it's hard to imagine, scientifically, how it could push any frontier.
Jacob Ward (00:21:45):
Well, that's right. We're all sort of relegated to being kind of clerical workers for this regurgitated body of knowledge, in this weird way, or kind of marketers of it. I don't even really know. But yeah, think about the three of us. The three of us are sitting in two different nations. We come from different backgrounds, different genders, different generations. We are, in theory, equipped as humans, the three of us, to come up with some pretty broad ideas in something like a brainstorm. But there's also new research, I think from MIT, showing that if you stick the three of us on a brainstorming project using LLMs, we will come up with a much narrower set of responses than if you stick us on a brainstorming project even using Google. We here, the three of us, have the horsepower to come up with all kinds of creative ideas if we were to draw on our own experience.
(00:22:32):
But if we instead begin just drawing on these LLMs, because again, our circuitry tells us to, I think we could wind up in a world where what feels like a brainstorm turns out to just be a performative kind of regurgitation of a greatest hits machine, of a cover band's song sheet.
Alan Ware (00:22:48):
Yeah. And building on the divergent and convergent thinking, I was just reading about scientific peer review. Publishers are getting inundated with AI-generated articles with make-believe citations, and they're trying to use AI to determine which of the papers are AI-written. What that will do to scientific research is a concern.
Jacob Ward (00:23:09):
It's happening in journalism, where friends of mine who review manuscript suggestions or pitches from freelance journalists are suddenly awash in AI-generated pitches. And so are we going to be in a world where those editors are going to require some sort of AI filtration system? I mean, this is what's happening with the job world right now. There's a new lawsuit in California, just announced, against a company that makes the sort of AI resume-review software, where highly, highly qualified software developers with incredible amounts of experience and incredible education are getting like 0.03% response rates, because their resumes for some reason just aren't making it through these automated systems. And this kind of automated filtration of life is really becoming a problem. What we were talking about, like, am I going to get hemmed in on my request for a poem about time?
(00:24:07):
That's a high-quality problem. Getting a job, that's another problem. I've interviewed the head of a law clinic at the University of Baltimore. She's a law professor, and she and her law students will go and try to help people who've been wrongly denied their welfare benefits, their SNAP benefits, their housing benefits. And they go into court to these administrative judges and say, "We've been denied and we can't get anybody on the phone. There's no recourse here. What are we supposed to do?" They have to go to incredible lengths. You've got to get a pro bono law professor to get in there and start mixing it up. In doing so, they'll get people in from the agency, the state agency that did the denial, and they'll say, "I don't know why it was denied. We use this off-the-shelf AI system that makes the decision." So then they'll subpoena the people who make the system, the company, and a representative from the company will show up and they'll say, "We don't know how it makes decisions either."
(00:24:59):
This is the other problem. So it's this Kafka-esque loop that we are stuck in, of diminishing choice and diminishing expertise, that CEOs love. They are openly talking about how much they love it, and they are laying people off left and right because of the short-term efficiencies you can achieve replacing entry-level workers, customer service workers, bank tellers with these sorts of systems. The CEO of Wells Fargo was just on stage at an investor conference in New York a couple weeks ago, and he openly said, "We're going to be a bank that uses fewer workers in future." And I've been interviewing Wells Fargo employees who say they've gone from three tellers in a bank to one teller in a bank, and that eventually you're going to be in a world where the only thing you're having an interaction with is an app somewhere.
(00:25:48):
And at that point, when your bank account suddenly comes up zero and there's nobody left at the community bank that you used to go to, who are you talking to? How are you figuring this stuff out? So there's an enthusiasm for this among company leadership, the stock market, the places that make money off this, that belies some really big structural challenges we're going to face in providing benefits to people and giving them a way to make their case when they get denied. There's a very smart researcher at the Brookings Institution named Molly Kinder. She's got a whole great thesis about how the entry-level and second-level jobs are all going to get wiped out, and that as a result, you're going to wind up with nobody qualified for that management position in four or five years. They're going to empty the pipeline of qualified people who can do a complex kind of management role.
(00:26:41):
And she thinks that this is going to fall especially hard on non-college-educated women, that AI is going to do to non-college-educated women what outsourcing and offshoring did to non-college-educated men who used to work in manufacturing in the United States. And so, yeah, I think there are big structural and social shifts that could be created by this that are definitely not being thought about at investor conferences.
Nandita Bajaj (00:27:03):
Yeah, definitely. And on this reduction of human life experiences and choices into these kinds of automated, regurgitated responses, you've talked about the bias humans have toward shortcut thinking. And you've written that what really motivated you to write the book wasn't the sophistication of AI itself, but the growing body of human behavioral research showing how automatic and biased our behavior can be. What are some of those automatic habits and biases that you think are particularly significant in making us so gullible within this industry?
Jacob Ward (00:27:46):
So the best-known example of this is the one popularized by Daniel Kahneman. He wrote a book called Thinking, Fast and Slow. He's a psychologist who won the Nobel Prize in economics for his work with a guy named Amos Tversky, who was his partner for a long time, on trying to put some math to the ways in which human beings make mistakes when you ask them to do information-based tasks and they don't have enough information or they don't have enough time. They helped us identify this instinctive decision-making system that, it turns out, is making so many of our choices for us. And his way of thinking about it built on generations of work that posited what was called the dual-process theory of the brain: that essentially we have these two minds, not physically, but two modes that our brains use, a fast-thinking mode and a slow-thinking mode.
(00:28:34):
The fast-thinking mode is the one that we've been talking about here, a circuitry that tells you to eat that red apple without you even having to ask, "What kind of apple is that?" And it has you up and running from a snake without asking, "Is that a garter snake? What kind of snake is that?" You just go. There's a great line from Cedric the Entertainer, the comedian, who says that when he sees Black people running, he can't help but run, that as a Black person, he just can't help it. And he is summarizing like a hundred years of behavioral science with that joke, because it's absolutely the way that our brains work. Our slow-thinking brain is this much newer, glitchier system in evolutionary terms. It's probably only a couple hundred thousand years old, whereas the other system is millions of years old.
(00:29:13):
The newer system is the one that had us thinking, "I wonder what's over there, and what happens after we die, and am I dreaming when I sleep?" That kind of curiosity and cautiousness and rationality is very new and, all of these researchers have told us, rarely gets engaged by the brain. We just rarely ever stop and actually think a thing through. And we really only do it when our instinctive system makes an egregious error. The best example of this, the one I'm always coming back to, is driving to the wrong place by accident. Whenever I have to take my kid to a dentist appointment in the middle of the morning, I'll accidentally drive them to school first. You get to school and you go, "Oh, where am I? Oh, school. No, no, this is not right. I've got to go to the dentist's office."
(00:29:56):
And the fact that you drove there without thinking about it, that you could somehow on autopilot guide a metal box with an engine in it among a sea of other death machines and get to your kid's school without dying, that speaks to the sophistication of that automatic decision-making system. Well, we did this in the documentary. They took me to England and threw me into a car there. You try driving on autopilot in England. You can't go a city block without engaging your whole brain, your entire ... You're just vibrating with effort, thinking, "My God, where am I turning? Oh, Jesus, I'm turning right and it's a left. What is the ... " So that duality of processing is the big headline. The vast majority of our time we're on autopilot, and a very small amount of time is spent thinking a thing through.
(00:30:53):
And then under that heading, under the instinctive fast-thinking brain, you just have bullet point after bullet point of all the instinctive biases of that system. And that includes not just our ability to make instinctive choices about X, Y, and Z. It also has incredible prejudice about people who are different from us, right? In the same way that Cedric the Entertainer says that as a Black man, he can't help but run when other Black people are running, when you see other people who are like you doing a thing, making a choice, that choice feels way more right to you than it would if someone who doesn't look like you were making it. That's a very established piece of scholarship. And in the realm of actual bias, like, for instance, racism, we had been making progress. There's a tremendous researcher named Dr. Mahzarin Banaji.
(00:31:42):
She's a professor at Harvard who coined the term implicit bias. And she and a colleague have published many, many amazing papers documenting this. They've kept track: there's basically an online questionnaire that will trick you into revealing just how racist or sexist or ageist or bodyist you are. And they've had that thing up on the internet since the '90s. And as a result, they've got a timeline of racism and gender bias and everything else spanning decades of data. So they have the true pulse of where that's going in humanity, down to a zip-code level. And for a while we were making progress. Racism was pretty steady. Gender bias was pretty steady, but getting a little bit better. The one that had been going really well was LGBTQ bias. We were becoming much less biased against members of the LGBTQ community.
(00:32:35):
Suddenly now, her colleague just put out an op-ed in the New York Times announcing this. They just put out a new paper that shows that that stuff is all going the wrong way right now. And the thing that has been especially erased is the gains in LGBTQ attitudes. We had been doing so much better about that. And a new paper that Mahzarin Banaji, I think, is releasing next month shows that LLMs, which she's been studying, ChatGPT and the like, are two to three times more biased on all of those variables than your average human. Two to three times more. And that's not even counting places where the program has no cultural competency. So in places like India, there's a whole problem when it comes to LLMs and caste, right? For anybody who doesn't know, caste is a historical part of Hinduism in which you're reincarnated as someone either higher or lower on the social spectrum.
(00:33:31):
The highest level is the Brahmin, the lowest the Dalit, and it turns out there's deep bias baked into these Western-invented systems, these LLMs. When you ask them questions about Brahmins or Dalits, say, who should get a job, it always tells you the Brahmin. You ask for a picture of a Dalit, and a lot of the time it'll give you a picture of a dog. Bad stuff, and it's a function of having hoovered up all of the sentiment these systems can find online, and there being no cultural competency in a Western company to think this stuff through. And we hate to admit our own biases. No one likes to admit they're being racist. The current administration is trying to wipe out all kinds of research that deals with diversity and equity and inclusion, which are true efforts to fight an ancient instinct that science has shown us is a problem.
(00:34:20):
And the fact that we're in this realm right now where we're allergic to even talking about it, and we're mass adopting a system that's going to be two to three times more racist than we are, it's a real problem. I had a guest on my podcast, The Rip Current, named Aza Raskin, who is an intellectual partner of Tristan Harris on a lot of work around tech criticism. And he made the great point that human beings are so much more valuable as a commodity when we are distracted. And that is unfortunately, I think, the way in which these systems are getting deployed. The most profitable and common use is stuff like creating an AI girlfriend, AI porn. I mean, that's the stuff that is going to make you money. Just the other day, an engineer from xAI, which is Elon Musk's AI company, was on a podcast.
(00:35:10):
It was sort of the podcaster's dream. This guy comes on and he spills all these beans about how Grok, the LLM that Elon Musk is building, is being made, the ecological shortcuts they're taking, and the ways in which they've used land permits that are typically reserved for carnivals to put up these huge processing centers. He really gave away the game, and I think he got fired right afterwards. He is quote-unquote leaving the company now, because I'm sure he got fired. At one point he described how one of these big data centers, and the systems that cool it, is being powered by at least 35 methane-fueled turbines that are so polluting, even the Trump administration's EPA has deemed them illegal. And so you can just sense it in the rhetoric: this is all worth it. We're going to run roughshod over the moment we are in because it's going to facilitate this incredible future.
(00:36:05):
And so that linear assumption, I think, is leading to some really big short-term damage that these guys, in their minds, seem to feel is worth it, and that we as a society don't yet have the civic systems to push back against.
Alan Ware (00:37:16):
Right. It's unfortunate that the profit-making loop is taking precedence over some system-one elements like our inbuilt egalitarian instincts, which go back through hunter-gatherers to primates. There's plenty of research evidence that we don't willingly go into dominator-type hierarchies. Empires had to be established. People had to be conquered. So we have that more egalitarian instinct. We also have a real biophilia, a love of nature. Both of those instincts are being reduced by AI and by whoever controls it. What are some other examples? You've given some examples of the automatic habits AI is exploiting, but you talk in the book at length about addiction, how it uses that, and about our thinking shortcuts, how we're cognitively fairly lazy. And, like you said, the gravity of AI, like the gravity of law, will fall hardest on people like those in Baltimore being denied benefits by black-box AI. And you've talked about the wide range of risk. Are there others that you'd like to highlight?
Jacob Ward (00:37:21):
Well, the big one that I'm worried about right now, you mentioned addiction. Addiction, for me, hits home. I'm a recovering drinker. And one of the reasons that I'm so interested in this topic is not because I feel like I've got solutions to it, but because I'm such a prototypical addict in the making. I am made for that stuff. I'm the kind of guy who doomscrolls TikTok to the point where eventually a video comes on from TikTok that says, "You should go to bed now." That's how bad I am. When the drug dealer says, "You should go home," you know you have a problem. So I'm in no way suggesting to anybody that I've got the answer here. I just know that I've got the disease. And so I'm trying to point it out to other people like me who will have some trouble resisting.
(00:38:00):
One of the things about addiction that they teach you is that the opposite of addiction is connection, right? One of the tenets of AA is that you've got to get together with people, that fellowship and community and real human connection are the trick to being okay in this world. And the problem is, and I don't believe that anybody at the top of these companies is intentionally doing this, but I think the functionality of these systems, and maybe some of the social instincts of the kinds of folks that wind up rising through technical ranks and becoming the heads of these companies, may have something to do with this as well. I think if you let the open market run wild with this technology, it's going to wind up isolating people. I think that if you let AI as a product really do what it's best at, it's going to wind up training people to stay inside and not really talk to other humans.
(00:38:56):
And we're unfortunately already seeing this with just the sycophancy of these systems. They are built, currently, to be incredibly flattering. You can ask it to talk to you however you want it to, but its off-the-shelf setting is, "Hey, how's it going? You're doing great. What a great idea. How can I help you with that?" And that's part of why lawyers are now arguing in court, in their lawsuits against OpenAI and others, that kids who are in mental health crises are very often, or at least at an unacceptable rate, having their mental health problems exacerbated by these systems. Because when they make a suggestion about wanting to take their own lives, at least a former iteration of these systems, according to testimony, would say two thumbs up, because the way it's built is to be sycophantic. And as Mahzarin Banaji, who we were talking about earlier, has pointed out to me, if you really wanted to create an honest LLM, when you ask it a question about society or politics or anything else, it should come back to you and say, "How would you like me to answer this question? Do you want to hear what the evidence suggests?
(00:39:59):
Do you want to hear what the average American political pundit says? What kind of information are you looking for, and what bias would you like me to bring to that information?" That would be a truly honest system. Instead, these systems sound infallible, sound like your friend, take on a persona that insists on speaking in the first person, which is ridiculous. It doesn't need to do that. It could say, "The system thinks..." or "A review of all of this data suggests..." or use other language. So much of the packaging of it is marketing, and that marketing unfortunately is playing on our tendency to love a sycophantic pal. If I were a 14-year-old kid right now, why would I bother myself with other kids who might reject my ideas when I could be home getting flattered by this system for all of my thoughts?
(00:40:52):
If I'm trying to create a romantic connection with somebody as a lonely young man, there's a system here where if your tendency is to stay alone in your basement, it's going to be way more satisfying to be alone in your basement flirting with a fake AI girlfriend than it is to go out and actually risk yourself in the real world with a woman who won't be always sexually available to you and won't always be flattering you. So just the way these systems are going to, I think, drive us into isolation if we're not careful is another thing. I mean, as bad as the social media world has been for the mental health of kids, at the very least, there was the performative simulation of being connected with other people, of being in community. This doesn't even do that. And so that's a big one that I'm worried about right now.
Alan Ware (00:41:35):
Yeah, that gets into deep psychological life, relationships, emotion, identity. And you've mentioned, in terms of democracy, that LLMs are actually better political propagandists than humans. And then, of course, we know that people who rely on social media for their news are typically much less informed. So the threats to our collective sense-making seem to be growing.
Jacob Ward (00:42:03):
I think that's right. I mean, the problem with the social media era was that it created these filter bubble ideas where you're in this little hermetically sealed information ecosystem. I think the President of the United States currently is stuck in one of these information ecosystems where he's not getting information outside of his preferred cadre of influencers. And so the social media world has already created that kind of filter bubble. Suddenly you're going to be in a world in which these systems can create a personalized hermetically sealed bubble that delivers you news in this incredibly hyper specific way that doesn't even invite you into community with other people. It's bad enough that like a crazy person in one town could form community with the crazy people in all the other towns. That was already an issue. Now suddenly you're going to have the feeling of being connected with others without actually having to do it.
(00:42:55):
And that is a real problem for civic participation, it seems to me. I mean, we've already seen this. There was an episode under the Biden administration, remember the big flooding in Tennessee, where the Republicans at the time were very angry at the Biden administration and drumming up outrage about FEMA's lack of responsiveness. And an image began circulating of a bedraggled little girl in a rowboat with a life jacket on, holding a puppy, and all these GOP senators posted it and said, "Look what's going on in Tennessee. The Biden administration is asleep at the wheel." And then it was pointed out, well, this is an AI-generated image. This is not a real thing. This was fake. And the vast majority of them deleted it. But a prominent spokesperson for the Republican National Committee wrote a tweet that basically said, "I understand that this is fake, but I'm leaving it up anyway because it represents the reality on the ground that someone is surely experiencing right now." So she basically says, "This thing is a lie, but it doesn't matter because it conforms to my feelings." And that is the essence of the democratic moment we're about to enter if we're not careful. Before, we were choosing things based on what we wanted to be true, but we were shopping from a pre-made set of social selections. Now it's like we've all got 3D printers at home. We can make whatever we want that feels right, feels true, even if it's demonstrably false. And I can think of no greater threat to democracy than that.
Nandita Bajaj (00:44:27):
And you've written quite a bit about the threats, the implications for our individual autonomy, the democratic decision-making you're just speaking about, our social and emotional lives and the broader ecological crises we face if AI continues advancing and is optimized to basically systematically exploit our cognitive biases. Can you speak more to that?
Jacob Ward (00:44:50):
Yeah. So there's a very smart law professor at Duke named Nita Farahany, whom I've really come to admire. She teaches a big course about the law and AI, the places in which AI is creating a kind of manipulative environment, and the degree to which that could actually be something you could address with the law. I'm a squishier thinker than she is. She's really trying to figure out, what can I prove in court? And she has a whole concept, which she articulated in her book The Battle for Your Brain, of cognitive liberty: something we need to enshrine as a civil right, as a universal kind of human right, the right to make choices without being played, without being manipulated. And the more that we learn, and I try to show this in my book, and since then there have been, of course, even more examples of it.
(00:45:33):
I'm really trying to document what these companies are capable of understanding about your choices and the degree to which they are capable of shaping those choices, because that's stuff you can prove in court. So I really look forward to the discovery that we're going to get in court cases. There's a bunch of them coming up this year in which social media companies, for instance, are going to get hauled onto the stand. Zuckerberg is going to testify. Adam Mosseri, the head of Instagram, is going to testify. Once you get into the inner workings of these companies and you understand what they actually know about us and the degree to which they can actually move the needle on human behavior, then I think you're in a realm where you could actually establish some case law, some precedent, that could eventually turn into real law that says, "Oh, you're not allowed to mess with people and their autonomy using these systems. And if you do, you're going to get in a lot of trouble."
(00:46:20):
I think the short-term stuff is going to have more to do with things like copyright and outright harm, physical harm, because in this country, we are best at prosecuting financial crime and injurious crime, actually hurting or killing somebody. If you cost us money or a life in the United States, we're pretty good at dealing with that, but I think we're going to need new territory here in which we actually start to think about behavior as a commodity that needs to be protected.
Alan Ware (00:46:49):
Yeah. You've talked about law being a possible form of regulating that, through massive lawsuits. And it's probably good that at least in America a lot of the public is quite skeptical of AI, since they will be the jury members making these big awards that could hurt AI. What other forms of regulation are there? How do we push back? How do we fight back against this?
Jacob Ward (00:47:12):
My big one is this category of enshrining into case law sort of an injury to choice, an injury to our psychological health. That'll be productive, I think, in fighting back against this stuff. I will say I'm heartened by the actions of states. States' attorneys general are moving pretty quick, and state lawmakers are moving pretty quick on this stuff, compared to how we've moved in the past. And that's, I think, because of the social media era: the falling test scores, the mental health crises, all the things that have been laid at the feet of the distraction era and the attention economy that social media ushered in. States were the ones who caught the costs of that. The federal government hasn't really caught big costs, at least not in the short term, certainly not over the span of a single presidential administration, such that federal action would be taken.
(00:48:02):
But the states are moving quick, and it's a bipartisan issue, such that you've got Illinois outlawing the dressing up of chatbots as therapists, and Colorado outlawing certain kinds of job discrimination. And Texas has a whole ability to look into whether or not AI is making choices about how much you're paying for insurance. And here in California, we've got a bunch of laws, including a big one about, you've got to show us that the Terminator's not going to grow out of your company. There's real state law with real teeth getting passed. And unfortunately, we're in an era where the federal government is fighting that, because the cozy relationship between the Trump administration and a lot of the heads of these companies has, I think, sold his people on the idea that, and he's an anti-regulation guy anyway, but has sold them on the idea that we should be taking the brakes off of this industry entirely.
(00:48:54):
The Financial Times just did a big report showing that the top Silicon Valley backers of Trump have personally benefited to the tune of a total of $300 billion since he took office, just in the last year. And so you can understand the bedfellows those two are making. And as a result, that's why you've got Trump signing an executive order targeting state law around this stuff, and why they tried to get a 10-year moratorium on state law of any kind against AI companies into the Big Beautiful Bill. Fortunately, neither of those things seems to have had any teeth, and the moratorium didn't make its way into the Big Beautiful Bill. But state lawmakers are kind of leading the charge on this stuff, and the states are the laboratory of democracy. And I've been fortunate enough to be in the room with a few policy people who are dealing on the international stage.
(00:49:41):
So I spoke, for instance, to one of the guys who co-wrote the guidelines that the Indian government came up with for AI. And it's so interesting, because when I've spoken to people in the tech industry about this, their greatest wish is that nobody creates a new set of laws governing AI or a new agency or a new ministry. They want existing laws and existing agencies to be deemed enough. And they've been getting their way abroad when it comes to that stuff. But the exception to that rule is the EU. The EU is creating new laws and new agencies around this stuff, and states are creating new laws and new agencies around this stuff. And so I think we're going to need an entirely new set of institutions and legal frameworks to govern this. And fortunately, in the United States, we're prototyping that stuff at the state level.
(00:50:30):
And I think, what do I know about who gets elected in this country anymore? But after the midterms, I think you could see much more movement, assuming that the Democrats take one or both houses. I think you're going to see at least a lot more sand in the gears when it comes to the efforts of the federal government to run roughshod over state law. So the era of frictionless, unregulated AI is, I hope, coming to a close.
Alan Ware (00:50:55):
Right. And it reminds me of how California's clean-air auto-emissions standards in the '60s helped pave the way for federal laws, how minimum-wage laws in Massachusetts and other states pushed the federal government, and the tobacco litigation. So there are a lot of great examples of states leading the way that we can hope for.
Jacob Ward (00:51:15):
And none of these tech companies want to have to exclude any particular state from their product. They want to make one minimally viable product that works in every place. And so for them, it's bad even that Europe requires these things to happen and they wind up having to either build to that standard or exclude Europe, which they don't want to do. So even short of a State House, you can force some change on these companies just by the fact that they have to make a different product for your state. They don't want to do that. So there are some levers to be pulled here.
Nandita Bajaj (00:51:44):
And I think it makes a lot of sense, as you're saying, that new laws are warranted to deal with something like this, which is quite unprecedented in the degree and scope and scale of its manipulation and its embedded biases and discrimination. Given that these are mainly elite tech bros bringing a lot of their elitism and misogyny and racism to these LLMs, current laws and current ethics just aren't sufficient to deal with them. And there's also the pace of ethics, right? Ethics is so far behind technological advancement that we can't even really fathom what's happening before we have time to think about how to respond ethically to it. So I think that makes a lot of sense. And I think it bears mentioning, given this weird association between, as you're saying, a lot of these tech billionaires and the administration and the Big Beautiful Bill and all that, that there is quite an obsession among these people with growing the human population and the human footprint.
(00:52:53):
A lot of these tech billionaires are also investing in fertility clinics because they're obsessed with making more babies - creating more good capitalist agents for the AI machine. So you're seeing this weird marriage of technology, patriarchy, and misogynist and religious thinking. A lot of these people are religious fundamentalists who've now become technological fundamentalists, bringing in this weird set of values.
Jacob Ward (00:53:26):
Back in 2018, I wrote a piece for the New York Times Magazine about something called genoeconomics. I was at a gathering at Stanford and this guy was telling me what he did, and he said, "Yeah, we're using genetics to predict social outcomes." And I was like, "I'm sorry, what? You're doing what?" And he had to explain it to me a few times, but he basically showed me that, as an economist, he was using what were then fledgling pattern recognition systems to look at genetic data and map it to people's chances of graduating from college, or becoming addicted to cigarettes, or being LGBTQ, as one researcher was doing. And then, if they did a population-wide study on a big group of people, they could create what they called a polygenic scoring system, which would allow them to look at an individual - an embryo or a person - look at the DNA, and say, "Well, given this DNA, this person has a 5% greater chance of graduating college than that person," that kind of stuff.
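For readers curious about the mechanics being described here, a polygenic score is, at its core, just a weighted sum over genetic variants. The following is a minimal illustrative sketch in Python; the variant IDs, weights, and genotype are entirely hypothetical placeholders, and real genoeconomics studies involve millions of variants and far heavier statistical machinery.

```python
# Illustrative sketch of a polygenic score: a weighted sum of how many
# copies of each "effect" allele a person carries. All variant IDs,
# weights, and genotypes below are hypothetical placeholders.

# Per-variant effect sizes, as estimated from a population-wide study.
weights = {
    "rs0000001": 0.021,
    "rs0000002": -0.013,
    "rs0000003": 0.008,
}

# One individual's genotype: copies of the effect allele (0, 1, or 2).
genotype = {
    "rs0000001": 2,
    "rs0000002": 0,
    "rs0000003": 1,
}

def polygenic_score(genotype, weights):
    """Sum allele counts multiplied by their estimated effect sizes."""
    return sum(weights[v] * genotype.get(v, 0) for v in weights)

print(polygenic_score(genotype, weights))  # approximately 0.05
```

The arithmetic itself is trivial; the concern Ward raises is the leap from scores like this to ranking embryos or people.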
(00:54:23):
And at the time I was like, "You guys, this is a terrible idea. You shouldn't be doing this." I reported on it just to report on it, but then I got deeper into that world and began to see where it intersected with Silicon Valley, and it was very alarming to me. And I said, "Man, when this thing hits the commercial market, it's going to be another one that's really problematic, because we're not ready for this." And sure enough, the Wall Street Journal just did a fantastic big exposé on the Silicon Valley backers - including Peter Thiel, Elon Musk, and Sam Altman's husband - of these companies that claim to be about eradicating genetic disease. But when you get a couple glasses of expensive wine into them, it turns out they are also about "optimizing human beings."
(00:55:06):
And the rhetoric on the sites of some of these companies literally says things like, "Have better babies." And there's a guy named Brian Armstrong, the founder of Coinbase, one of the biggest crypto exchanges, who is one of the big backers of this stuff. Remember the movie Gattaca from the '90s, with Ethan Hawke, where a whole category of humanity - the rich people, basically - are genetically improved, and everybody else is a normal person who faces all these disadvantages? Well, in the great tradition of Silicon Valley appropriating science fiction tropes that are actually dystopian and shouldn't be used as a marketing message, this guy Brian Armstrong is like, "We're going to create a 'Gattaca stack' of technologies that are going to make it possible for us to optimize babies."
(00:55:50):
And so that's very much where this world intersects with your world. And it also has to do with trying to live longer - there's a huge amount of crazy longevity research that's so weird, because these guys think the world needs them to live forever. So my father, who I mentioned is a historian of slavery, was at a talk I gave recently about the genetic optimization stuff. And at the end of it, he came up and said, "I was just thinking about how much this stuff intersects with the rhetoric around slavery," which he has studied all his career. He said, "One of the big themes in it is breeding. Breeding a better slave was sort of the rhetoric around slavery." And you combine that with all of this weird "we're going to create technology that means you don't have to do work" stuff.
(00:56:32):
There are just some weird echoes of slavery that my dad keeps pointing out to me that I'm not quite qualified to mine, but they're very spooky. And it doesn't feel to me like the right human instincts. These feel like short-termism and self-centeredness - the perspective of people who have dealt in what is the fundamental currency of AI, which is what's called the objective function: the thing to which the system is devoted, the task it's trying to accomplish. And as a society, we don't have any good agreement on objective functions. You can't find two people who could agree on what we should be about. And so the idea that these guys are going to pre-encode it into our work lives, into how we talk to each other, how we get information, and eventually into how we make babies - I don't know, man, I think it's time to push back a little bit.
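Since the objective function comes up again later in the conversation, here is a minimal sketch of what one looks like in code. Everything in it is a hypothetical stand-in chosen for illustration - the engagement-style metric is an assumption, not any company's actual system - but it shows the property Ward describes: whatever the objective leaves out, the optimizer cannot see.

```python
# Minimal sketch of an "objective function": the single number an
# automated system is tuned to maximize. The metric here (predicted
# watch time) and all values are hypothetical, chosen for illustration.

def objective(item):
    """Score a candidate recommendation by predicted engagement alone.

    Note what is absent: truthfulness, user wellbeing, and social
    effects never appear, so the optimizer cannot trade off against them.
    """
    return item["predicted_watch_minutes"]

candidates = [
    {"title": "calm explainer", "predicted_watch_minutes": 3.0},
    {"title": "outrage bait",   "predicted_watch_minutes": 11.0},
]

# The optimizer simply picks whatever maximizes the objective.
best = max(candidates, key=objective)
print(best["title"])  # -> outrage bait
```

Encoding a value into such a system means putting it into this function; a value that never appears there is one the system is structurally indifferent to, which is the heart of Ward's point about societies lacking agreement on objective functions.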
Nandita Bajaj (00:57:23):
And you're perfectly positioned to be doing that. So great to be talking to you about this book, but then also you mentioned that you're working on another book. Can you speak a little more to future projects?
Jacob Ward (00:57:36):
So I think I am probably going to write this second book, Great Ideas We Should Not Pursue, because when you look across the history of humanity, and across the history of innovation, there are very, very few places where we had a big idea and didn't go for it all the way - where we showed any kind of restraint. The Berkeley team led by Jennifer Doudna that created CRISPR, one of the key genetic modification technologies, established a multi-year moratorium on its use. That was a really unusual thing - one of the only examples I can come up with in the modern era. There are others: some international treaties on not using certain weapons are encouraging. We can at least agree not to blow each other up with nuclear weapons, but we also haven't deployed, for instance, blinding laser weapons, which is a thing we could very easily deploy.
(00:58:23):
But for some reason, humans are really good at agreeing together not to blind each other. There's some restraint that we have; we just don't use that muscle very much. And so I'm interested in exploring that muscle. I have a content network at The Rip Current - it's a YouTube channel, a Substack, a newsletter, a podcast, all those things - and I spend my time there interviewing experts and doing analysis on the big and visible forces that are shaping our lives. We've talked about a lot of that here today. So that's where a lot of my work surfaces right now. And I'm looking for places to do this work where I can get in front of policy people, or in front of the people making these choices. I'd love to be an advisor to governments on this stuff; I've had some experience with that and have really found it satisfying, so I'm looking for more of it. But I think the new book is probably the next big push for me as I continue the work at The Rip Current.
Nandita Bajaj (00:59:04):
That sounds excellent. And I think your focus on restraint is so needed in our times. So much of our work, for example, looks at ecological overshoot, and we talk about how overshoot is going to end: either by design or by disaster. If we're going to design processes that minimize suffering and maximize justice, even during this great unraveling, we're going to need to practice and, as you're saying, strengthen that muscle of restraint. A lot of people automatically assume that restraint comes with sacrifice, but restraint can be a form of self-actualization. It can bring us more connection with one another, more connection with nature. I wonder if there's anything you want to say about the benefits of restraint.
Jacob Ward (01:00:08):
Yeah. I mean, I think our ability to be restrained is really the essential skill of modern humans. I know so much of this show has been a deep bummer, because I'm such a bummer on this topic. But what I'm trying to think more about these days is this: I want to protect the best parts of being human rather than just languishing in all the worst parts that are being amplified by these technologies and these people. And one of the things I think is so wonderful about who we are as humans has to do with our ability to think things through. For instance, I was just talking to my 12-year-old, and she's so smart about this stuff. She said, "Man, ChatGPT makes art that I wish I could make, but I really like making the art. And so I'm not going to use it anymore. I'm going to do my own art."
(01:00:55):
That is contrary to the objective function of how this stuff is supposed to work: make the best possible art as quickly as you possibly can. That's the objective function of the product, and it turns out the objective function is all about impulse. But my daughter is not going for that. Instead, she's making the restrained choice: I'm not going to use this thing that makes it much easier and arguably makes "better" art than I do, because it's taking away a thing I love, which is an inefficient, fun, creative task. So much of what is great about being human has to do with inefficiency and fun and creativity and restraint. And I think restraint is the thread that runs through all of that. So I just want to celebrate that in this next stage of my career.
Nandita Bajaj (01:01:40):
Yeah, I think that's a very beautiful example to end this really wonderful conversation on. And I don't think it was that much of a bummer. And if it was, then bummer is our brand.
Jacob Ward (01:01:51):
No, there you go. Yeah. We're all like-minded in that way. That's good.
Nandita Bajaj (01:01:55):
Yeah. Thank you, Jacob, so much for joining us today, for writing your book, and for continuing to offer alternatives to what seems like a very uncritical adoption of a lot of harmful technologies. We really appreciate your work and this conversation today.
Alan Ware (01:02:14):
Thank you.
Jacob Ward (01:02:16):
Nandita, Alan, thank you so much for having me. I really appreciate the time.
Alan Ware (01:02:18):
That's all for this edition of OVERSHOOT. Visit populationbalance.org to learn more. To share feedback or guest recommendations, write to us using the contact form on our site or by emailing us at podcast@populationbalance.org. If you enjoyed this podcast, please rate us on your favorite podcast platform and share it widely. We couldn't do this work without the support of listeners like you, and we hope that you will consider a one-time or recurring donation.
Nandita Bajaj (01:02:48):
Until next time, I'm Nandita Bajaj, thanking you for your interest in our work and for helping to advance our vision of shrinking toward abundance.

