Humachines, Big Tech, and Our Future

A dystopian fusion of human and machine is being pushed on us by a big tech elite. Michael D.B. Harvey, author of The Age of Humachines: Big Tech and the Battle for Humanity's Future, warns of the 'humachinator' worldview that weds unrestrained technology and capitalism - and explores what we might do to reclaim a future rooted in democracy and ecological balance. Highlights include:

  • How the 'humachine' blurs the line between human and machine, technologizing everything and everyone;

  • How the history of scientism and empiricism has led humachinators to imagine the brain as a computer and the body as a machine, and to believe that engineering can control humanity, biophysical laws, and even death itself;

  • How big tech oligarchs merge unfettered science with unfettered capitalism to produce 'ultrascience';

  • Why big tech oligarchs' faith in unrestrained technology and markets has merged into 'ontocapitalism' - a form of capitalism that commodifies nature and all human experience;

  • How humachinators use 'tricknology' to hype their technologies and get us, especially the young, addicted to their products;

  • What the five types of humachination are: cognitive, emotional, relational, the mechanized human, and a totalizing daily environment where our lives are surveilled, interpreted, and mediated by machines;

  • How the extreme individualism in Silicon Valley undermines democracy and collective decision-making;

  • How the 'G' word, growth, is behind all the humachinators' actions and dreams;

  • Why our relationship with technology is ultimately political, not inevitable, and why we need to resist the big tech oligarchs who profit most from unrestricted technology;

  • Why we need to move from CIMENT values (competitiveness, individualism, materialism, elitism, nationalism, and technologism) to CANDID values (cooperative, altruistic, non-materialist, democratic, internationalist, and deferential to nature) - and how we might shift those values.

TRANSCRIPT:

    Michael D.B. Harvey (00:00:00):

    It's almost as though capitalists have realized we've got to a point where either capitalism has got to change or humanity's got to change. And capitalism's saying, well, it's not going to be us. Humachination is the ultimate dehumanization, that basically the machines provide a better model for life than life itself and represent a higher form of being than anything that has been produced over the past three and a half billion years of life on Earth.

    Alan Ware (00:00:33):

    That was writer Michael D. B. Harvey, author of The Age of Humachines: Big Tech and the Battle for Humanity's Future. In this episode of OVERSHOOT, we'll explore how the world's most powerful forces are leading us into a human-machine fusion that endangers humans and the planet and what it might take to resist and break free from the age of humachines.

    Nandita Bajaj (00:01:04):

    Welcome to OVERSHOOT, where we tackle today's interlocking social and ecological crises driven by humanity's excessive population and consumption. On this podcast, we explore needed narrative, behavioral, and system shifts for recreating human life in balance with all life on Earth. I'm Nandita Bajaj, co-host of the podcast and executive director of Population Balance.

    Alan Ware (00:01:30):

    I'm Alan Ware, co-host of the podcast and researcher with Population Balance. With expert guests covering a range of topics, we examine the forces underlying overshoot - the patriarchal pronatalism that fuels overpopulation, the growth-obsessed economic systems that drive consumerism and social injustice, and the dominant worldview of human supremacy that subjugates animals and nature. Our vision of shrinking toward abundance inspires us to seek pathways of transformation that go beyond technological fixes toward a new humanity that honors our interconnectedness with all of life.

    (00:02:07):

    And now on to today's guest, Michael D. B. Harvey is a London-based organizational psychologist and writer. In the 1980s, he was an entrepreneur in high-tech innovation, pioneering online interactive services, which the internet would eventually turn into mainstream activities. In the early 1990s, disappointed with the capacity of new technology to ameliorate human behavior, he trained as a psychotherapist and organizational psychologist, and since then has worked with many leading organizations across Europe. He has degrees in English literature, sociology, psychology, psychotherapy, and organizational psychology. His books include Interactional Leadership and Utopia in the Anthropocene: A Change Plan for a Sustainable and Equitable World. His latest book is The Age of Humachines: Big Tech and the Battle for Humanity's Future. He is also an eco-political songwriter whose songs can be found on all major music sites. And now on to today's interview.

    Nandita Bajaj (00:03:09):

    Hi and welcome to OVERSHOOT, Michael. It is so great to have you here.

    Michael D.B. Harvey (00:03:13):

    Great to be with you, Nandita and Alan. I've been looking forward to this for quite a long time.

    Nandita Bajaj (00:03:18):

    As have we. And your recent book, The Age of Humachines: Big Tech and the Battle for Humanity's Future, raises deep and disturbing questions about how much of our humanity we're outsourcing to machines and how far we might go. The humachine age, under the misguided autocratic leadership of techno-capitalist utopians, threatens to move us even further from being the embodied biological creatures that we are, capable of understanding and embracing ecological limits. And we've really appreciated your deep dive into this ideology, but also how you offer a path toward a more ecologically sane and socially empowering way of living in balance with the rest of life on Earth. So we're really excited to chat with you about all of this, just as we are seeing the acceleration of AI and techno-solutionism within our current mainstream narratives.

    Michael D.B. Harvey (00:04:25):

    Yes, indeed. I mean, really since the book was published last year, since the book was effectively finished maybe two years ago, everything seems to have gone into overdrive, but in a quite predictable way. But nevertheless, it's extraordinary the pace at which this change is coming. And of course, the Trump presidency has massively accelerated that. It might've been different had Biden or Harris got in because they did have a regulatory agenda. Whether they would've enforced it is questionable, but that's all out the window now.

    Nandita Bajaj (00:04:56):

    Totally. So yeah, we can jump right into your book, The Age of Humachines. To start, can you describe in broad terms what a humachine is?

    Michael D.B. Harvey (00:05:07):

    Well, essentially a humachine is a humanized machine or a mechanized human. It's part of a process of what I call humachination, which really aims to narrow the gap between humans and technology until it's virtually non-existent. In a sense, it's the technologization of everything and everyone everywhere. Its deep history, I think, goes back to the beginnings of the western science revolution, which we can perhaps talk a bit more about. Its more recent past, I suppose, is really the whole postwar era, where we've seen the development of robotics, the beginning of the AI project back in the 1950s, breakthroughs in genetic engineering, the whole computer science revolution. And all of that, I felt, exploded into life - very conveniently from a historical point of view - in the year 2000. Essentially you got the dotcom crash, in which capitalism was telling technology: yes, we like all of this, but you've got to find a way to actually make money out of it.

    (00:06:15):

    And that brought a big transformation, particularly in Google, which up to then had been just a rather brilliant research project, determined not to have anything to do with advertising. They wanted to remain a kind of academic, neutral service. But when their investors threatened to pull out, they did a massive volte-face and suddenly cottoned on to the idea that they could actually reinvent advertising and make it a bigger, much more powerful, and of course immensely more profitable thing than it had ever been before, certainly for technology companies - and in that way, in some ways, transform the prospects of capitalism to keep going a little longer. And part of that whole process is really the merger of capitalism - increasingly concerned with where the next growth spurt is coming from, as fossil fuel becomes more and more problematic and clearly hasn't got a long-term future - with the development of what I call ultrascience.

    (00:07:20):

    This is a merger between science, particularly on the physics side, bolstered and energized by computer science - which, it's worth remembering, only started to exist as a formal subject in the 1970s. It's now the academic discipline which practically every major tech entrepreneur has studied. And that in itself is quite interesting from a leadership perspective: how you have this extraordinarily homogeneous group which is united by total belief in capitalism, by its belief in ultrascience - that science should be allowed to just go wherever it can without accepting any kind of limits whatsoever - and by a vision which ultimately transcends any notion of ourselves as biological creatures, or of nature in any generally understood sense as something that we are all part of and all have to work with. This is the transcendence of biology by physics and the re-engineering of everything.

    (00:08:24):

    And it is, as I say, capitalist - it's economic, it's scientific - but it's also personal. It's a kind of personal vision that I think a lot of these people have. With Musk, Bezos, you can see it. They're obsessed with science fiction; they've grown up with science fiction. Bezos himself has said he sees himself as a builder - he's just there putting the great ideas that science fiction writers have come up with into reality. He's a complete Star Trek freak, as is well known; I think he even paid to be in one of the films. And at the same time, they've grown up with computer science, computer games, computer code. They tend to be prodigies - Zuckerberg was - people whose worlds were transformed the moment they first hit that keyboard. And they're obsessed with computer games, like Musk, who fancies himself the best computer gamer in the world.

    (00:09:18):

    And again, that's an extraordinary kind of imaginary. I'm not a great gamer, but to be in that world where anything is possible and anything can be engineered - it's a parallel universe, which they are simply pursuing in the hope that some of those parallel lines meet the real world and there is no difference. So there's a lot going on in the whole humachination project, which makes it extraordinarily powerful, extraordinarily difficult to oppose, very, very difficult to steer in better directions - but potentially it is the most extraordinary transformation of being human on planet Earth. So that's why I got interested in it.

    Nandita Bajaj (00:10:06):

    Thank you for that explanation. And as you've captured, at the very heart of this humachine worldview is transcending biology and ecological limits - not seeing humans as organic, meaning-making beings embedded within ecosystems and cultures, but basically as machines waiting to be optimized or transcended. And you started going into the concept of ultrascience - you've looked at the combination of scientism and neomechanicalism, and you call that ultrascience. Can you unpack these concepts a bit more and how they are related to one another?

    Michael D.B. Harvey (00:10:51):

    I mean, ultrascience, I think, is that belief - you could argue it's perhaps always been there in science - that there's no frontier which can't be crossed. And the mission is to understand more and more of the universe and reality till, I guess, we understand everything about it. That's the kind of element that's always been there in science, which is often referred to as scientism. It's basically going over the top: thinking that you can understand absolutely everything about reality, to the point where, as positivists believed, particularly at the end of the 19th century, you could one day predict reality. You could simply put everything into a machine and calculate it in some way; we would know everything about everything. So those are elements of it, but I think in some ways, as you say, the machine figures massively in all of this.

    (00:11:45):

    And the beginnings of that really lie in the western science revolution of the late 1500s and early 1600s. Remember that quote from Johannes Kepler, the great astronomer, where he says in 1605 that his aim is to show that the celestial machine is not to be likened to a divine organism, but to a clockwork. And so this was the arrival of the great metaphor of the world - no longer something completely ineffable and mysterious, which we are part of, which we can benefit from but which can be very dangerous. Here it is, reconfigured as a watch. And who makes watches? Well, human beings. And so God also is transformed - and this is still an intensely religious age - from someone or something just beyond all human comprehension into a brilliant watchmaker, a very, very clever engineer. And so the idea begins to form there, both in theory and in practice, particularly with Francis Bacon and the whole empiricist science movement, which is now explicitly framed as mastering nature, and in quite an aggressive, very masculinist way - seeing nature as a female who has to be almost beaten into submission and enslaved, all her secrets teased or even tortured out of her.

    (00:13:12):

    The whole process begins of trying to understand nature and everything about the universe as a kind of machine. And crucially, the human body and brain are then reconfigured, particularly with Descartes and Hobbes. The brain is transformed into something which is basically a machine. Hobbes calls it a calculating machine - he didn't come up with the word computer, but he might have; clearly that's what he was thinking of. And the body is then just a machine, a whole set of wires and levers which respond to the inputs of the calculating machine. So that, as archaic and ridiculous as it sounds, has just been updated into the fundamental metaphor which in many ways drives the whole of humachination: the brain is basically a computer. We have a fair idea of how computers work, so we should be able to decode basically everything that the brain does.

    (00:14:17):

    The brain becomes just a piece of software driving the hardware of the human body - a body which, as you say, is embodied, is unique, is part of a cultural network which is itself unique every time. But instead of seeing consciousness as embodied, with the human body playing a huge and incredibly complex role in everything we do - including things that we think of as thinking and reasoning and decision-making - it's all reduced to this incredibly simple but very compelling metaphor. Let's face it, how many people really want to go too deeply into neurology or anatomy? So that's one of the metaphors which was there from the beginning and which has driven on this vision of humachination. But it's also very much a logical extension of the early desire of the physicists, and to some extent the astronomers, to master the universe, to master nature.

    (00:15:20):

    And it seems to some scientists that that's more and more possible. And of course there's the whole data breakthrough, and genetics again, which lends itself, for some people, to the idea that the human genome is just a machine. It's a piece of code. All we need to do is decode it and recode it, and we can do what we like - we don't have to bother with all that messy reproductive stuff and the lottery of how your kids come out. Because again, another of the key elements in the humachine vision is immortality, or life extension: fundamentally re-engineering the human body to such an extent that death becomes almost an option, maybe something that can be postponed for hundreds of years or maybe done away with. It's a vision I guess humans have had for a long time, immortality.

    (00:16:16):

    Generally it's been displaced onto the afterlife, which is perhaps an easier way of thinking about it. But it's another example, I think, of how humachinators are fundamentally anti-life, deeply rejecting the very premise of life - because there is no life without death. The two go together; they're fundamentally entwined, like day and night. And everything we know about life is premised on the idea that it changes, that it's not around for very long, even for human beings, who have a reasonable longevity. To say that you are determined, as many of these humachine proponents are, to get rid of death is an indication of that kind of scientistic view. But yes, other aspects are really the rejection of any sense that there are planetary boundaries, or that there are scientific laws which are in any way permanent. They're seen as temporary, almost placeholders - which in one sense you might argue is okay.

    (00:17:24):

    Take, for example, the second law of thermodynamics - entropy, the idea that everything ultimately deteriorates: energy winds down while randomness and disorder ramp up. That of course plays a fundamental role in ecological economics over the past 50 years or so, and I think in a lot of ecological and environmental thinking. Someone like Ray Kurzweil, who's in many ways the supreme philosopher of humachination, whose book The Singularity is Near came out in 2005 - he just regards entropy as something that can easily be dispensed with. Like other transhumanists, he favors extropy, which is basically getting bigger and bigger and bigger: an exponential law of everything, particularly as far as technology is concerned, which he sees as not only a scientific law but an economic law, based on Moore's law - things just get bigger and bigger and faster and faster.

    (00:18:27):

    And of course the pace of technology these days gives a certain amount of credence to that. But there's this idea that any kind of law - for Kurzweil, even the speed of light - is just a rather temporary inhibition which we'll soon overcome. The problem is in assuming that all of these laws are going to be broken, so you can already more or less get rid of them; you can leave them out of your thinking, which is essentially what ultrascientists do. So there's this sense of no-limits science, which is just going to go on getting bigger and more powerful until, yes, it reaches that point where we know everything about everything and can then do everything. And in that world - a species of humans with computerized brains, or brains that are essentially computers, or software programs uploaded into the cosmos - all of that kind of utopianism becomes easy stuff.

    Alan Ware (00:19:26):

    And as you've noted, a lot of the humachinators of the modern tech elite have an abiding faith in both capitalism and technology, which have rewarded them handsomely with money, social status, and power. They see a sort of linear progress model of human civilization, with themselves on the cutting edge, leading humanity forward. And you've termed the most extreme humachine proponents' version of capitalism 'ontocapitalism', with 'tricknology' as a key form of marketing this version of capitalism to the public. Could you give us an overview of your view of ontocapitalism and the tricknology you see being used to sell it?

    Michael D.B. Harvey (00:20:13):

    Yes, ontocapitalism - ontos from the classical Greek for 'being'. It's really a type of capitalism, I think the ultimate type of capitalism, which has gone from transforming all human relationships - which of course earlier forms of industrial capitalism did, in terms of the mechanization of labor, the commodification of relationships, the commodification of nature and the environment - to essentially the transformation and commodification of everything, of being itself. So the aims are completely unlimited, really. It's almost as though capitalists have realized we've got to a point where either capitalism has got to change or humanity's got to change. There's come a crunch point, and capitalism's saying, well, it's not going to be us. We've made our choice there. So humans need to change, to the point where actually all labor is automated - which is fundamentally the goal. Elon Musk has said it; Larry Page of Google has said it. Basically, for them it's axiomatic.

    (00:21:22):

    All jobs will be done by AI or robotics. And they're thinking in terms of not even decades - years, almost. We have no idea really what that world looks like, but that is one of the most fundamental transformations of what it is to be human. One thing we know about human beings is that, yes, they're very good at making tools, but those tools are used in the very, very practical sense of earning a living; societies are built that way. If you were actually going to transform every single form of work - paid labor as it is at the moment - what does that do? It's almost impossible to comprehend. And of course, AI and robotics and all of these different types of humachine are not just interested in changing jobs. They're changing everything that we do: every human activity, every form of creativity, every form of emotional relationship, every social relationship, every caring activity, every aspect of the body and the brain and so on.

    (00:22:26):

    So that's why I use the word ontocapitalism - to suggest something that in some ways is bigger even than Marx thought was possible. He saw the automation of labor, but he didn't see the automation of everything. And as you said in your introduction, Nandita, fundamental to ontocapitalism and ultrascience is what I call machine supremacism: the idea that the machine provides a better model for life than life itself, because it's totally controllable - and by whom starts to get rather complicated. It's that fundamental idea that machines represent a higher form of being than anything that has been produced over the past three and a half billion years of life on Earth, and certainly over the past few million years of the development of the Homo genus, let alone just old Homo sapiens.

    (00:23:23):

    So tricknology is really a whole series of ways, I think, of marketing technology. Having been a technology entrepreneur myself back in the 1980s, I'm kind of familiar with some of the basic principles. The thing that interests me about it is that, unlike conventional advertising and marketing, where a lot of the stuff you're trying to get across is fairly static in the sense that it's already developed - you've got your baked beans or your soap powder, and it is what it is; in fact, if it's been around pretty well unchanged for a long time, that's probably an asset you want to get across - new technology is always in development. And what you are doing to some extent in selling it - and remember, you've got to sell it to investors before you ever get to sell it to consumers - is selling a vision of the future. You are asking people to come on board for something which is still developing, which probably isn't very well developed.

    (00:24:22):

    And yes, there are all sorts of ways in which that's done. Some of it is the kind of performative stuff that we get in terms of launches. Steve Jobs really was the first to get onto the idea that releasing your latest edition of a phone or a laptop could be staged like a Hollywood blockbuster, going for all of that razzmatazz and so on. But I think what interests me is the way all of that tricknology leads to what I call technologism, which is a kind of religion - a very modern religion, a modern philosophy - which basically equates human progress with the progress of technology. And essentially that's increasingly where we are. We are just thinking, well, we don't know what comes next, but it seems to be an improvement on what we had the last time. Do we really need a new iPhone?

    (00:25:11):

    Probably not. And this is the whole direction, I think, of the modern economy. No one actually knows how all of these things are supposed to work out. What happens when you get mass automation of jobs? What happens if you continue with robotization? With a lot of these changes, there are attempts at regulation, but what all that regulation is really doing is trying to catch up with what's already happened. But we buy into it. In a sense, we're a bit like the poor kids in the Pied Piper of Hamelin narrative. There's something magical about tech, and we can't entirely deny that. But where we're going, who knows? No one ever found out what happened to those kids. And it's also relevant because of course it is the young, in a sense, who are leading so much of it. I do write about that quite a bit - that it's the young selling to the young. It's young men, often in their early twenties, sometimes not even waiting to finish college, Mark Zuckerberg style, in their teens.

    (00:26:20):

    And that interests me from a risk point of view, because who is the most risk-ready, risk-happy, least risk-averse demographic on the planet? It's young white men. Women tend to be more risk-averse than men. Whites tend to be more risk-tolerant than non-whites. And the young tend to be much happier with risks than the old. And who are the people adopting these things most fervently, most unconditionally, uncritically, and imaginatively? It tends to be the kids. Look at the mobile phone, the smartphone, texting. That's in a way the ultimate technology, or the ultimate technique: you don't really have to do anything, you just have to get it to kids. And of course, all of these social media platforms are very much targeted at kids - the marketing teams are bonused on their ability to increase the number of scrolls that kids and others make.

    (00:27:30):

    So that's the easiest trick of all. But I think the biggest trick being played on us is this idea of follow the technology - we are technology, and that's all that matters. And that, of course, is the most dangerous trick of all, because actually all the decisions ultimately made around technology are political decisions. It's not the technology that makes the decision - although we are actually heading in that direction. It tends to be those who own the technology and who benefit from it. As you know, in my book I look at that in the past, going back to agriculture and so on. It's always those who benefit most from a new technology who tend to push it. And of course, in a pyramid society, that is quite difficult for most of us to oppose.

    Alan Ware (00:28:19):

    And that tricknology is also happening, it seems, in the financial markets with the hype that AI is getting - now that the top 10 S&P 500 companies make up a larger share of the index than they did during the dot-com bubble. So you've got Meta, Alphabet, Microsoft, Amazon, all of them hugely valued. They're selling a lot of the hype around AI, and none of those models are profitable yet. The company making money is Nvidia, which is selling the chips to Meta and Alphabet and Microsoft. It reminds me of the California gold rush, where the people making money were the suppliers of picks, shovels, food, and packs, not the gold miners themselves. So there is a tricknology that could be played on the whole society in that sense, and on government. And I've read that half of US GDP growth right now is AI-related. So there's a lot riding on that bubble, and we'll see where it goes.

    (00:29:14):

    Other bubbles did create things of value later, whether it was the railroad bubble or the dot-com bubble - as you said, when Google figured out it needed to be an advertising company. There were ways ontocapitalism could find a way through the bubble, through the crash. But it seems like the technology and the hype have really worked for them so far, and they'll keep pushing it until they can't. As you make clear, though, the wildest dreams of these humachinators will always be reined in by very real biophysical limits. We've seen how the science fiction dreams of the fifties and sixties were way out of line with what actually happened - underwater cities, flying cars, massive space colonization. None of that happened. So what do you think the humachinators don't understand? They've been so heavily rewarded by a society unmoored from biophysical limits - there's always enough oil, coal, natural gas, solar, wind, whatever, to power their dreams, and enough copper and lithium and cobalt. But there are biophysical limits, of both inputs and pollution, to their dreams of these humachinator utopias.

    Michael D.B. Harvey (00:30:26):

    Yeah, I mean, certainly biophysical limits, I think and hope, will prevail. In terms of the possible coming financial crash, there may well be huge resistance - I think there has to be huge resistance to job automation - and those are things which could have a major impact, but I think that's difficult. Undoubtedly, we live in a world of 8 billion people with very, very distinct planetary limits - six of the nine boundaries already breached, and one more, ocean acidification, on the brink. All of those are going to have disastrous consequences for the climate, particularly for the billions living in the most vulnerable climate niches. Those are limits. But the humachinators will tend to think in engineering terms - in geo-engineering terms. If there's a problem, there's a solution to it, a whole range of different ways of taming the climate. Bill Gates is heavily invested in some of those.

    (00:31:30):

    Of course, we've also got the British government, very disappointingly, investing heavily in carbon capture and storage, which has long been the big technologistic get-out clause - and a very, very good example of tricknology: a possible solution which is presented as a fact. It's what I call a mirage technology. CCS has been built into the IPCC climate scenarios from the start as though it were actually there, ready to turn on, whereas 30 years later it isn't. And then there are the biophysical limits and all the problems in creating mass renewables - all the natural resource limits as we switch from fossil fuel exploitation to greater independence: copper, as you say, and all of these rare earth metals, all of which, of course, China has been anticipating and working on for the past 50 years now.

    (00:32:35):

    So that's a huge problem for western humachinators. But the question is, is that the way they see the world? Do they see themselves as committed - I'm thinking particularly of North American big tech - to the kind of big society in which there is some responsibility for the majority of the population? It's something I haven't talked about too much in the book, but I see more and more that there is a kind of fanaticism about the beliefs of western big tech that will simply say: well, we go with what we believe in, and any kind of environmental problems can be engineered away. And if they can't be engineered away, we go for what is in any case more like the libertarian solution, which is not the big state. Leave that to China - China has been doing the big state for the past 2,000 years or so, and that's the whole principle of the Chinese Communist Party.

    (00:33:35):

    For people like Musk it's much more: let's live in a world with people like us, people who understand us, people who are techies like us, and yes, some ordinary non-tech human beings will be needed for the human-to-human services which are likely to be the jobs that remain, in healthcare, in psychology, entertainment, sex work. But I'm increasingly thinking they're likely to go in that direction. So does that escape climate breakdown? Well, they'll go to the places north and south where they're likely to be most protected. It's not good news for Canada; one way or another there's almost certain to be a huge push of migration to the far north. So there are those potentials of big tech fortress colonies, private islands, the sea. They're very keen on seasteading; maybe that comes to, not necessarily underwater cities, but floating cities.

    (00:34:44):

    And I think it does come back to that key question of work. What happens if so many jobs are automated? Do we get to the situation, for the first time ever, where the rich don't need the poor? Because rich and poor go together like life and death: you can't have one without the other, so far. Do you then get a world in which the rich are saying, we can have everything we want, we can have a kind of utopia, so why do we have to look after the 80% whom we no longer need? Now that could be good for the 80%, because it gives us the opportunity, in a way, to go back to the beginning, to go back to the drawing board. I don't mean without any technology, because there would be so much knowledge of technology that it could be reproduced in terms of what we actually need. But it creates a situation where we could get to the position we desperately need to be in, which is to decide not what technology can do or how far it can go, but what we want as a species.

    (00:35:55):

    What is the goal? And then we decide how technology can help. And if it can't help, get rid of it. But we have this complete reversal of priorities in terms of what we should be thinking about, which again is a result of technologism, of the Pied Piper luring us into a world of ever better technology rather than asking: what is it we actually want as human beings? What satisfies us, and where does technology, that very great human skill of tool-making, fit in with that? But what we can't allow is what's happening now: that one skill becoming so powerful that it essentially obliterates all other human skills, because that's where we are going.

    Nandita Bajaj (00:36:46):

    And as you're saying, with this kind of western tech supremacy, a lot of these tech billionaires, as much as they may be constrained by biophysical limits, are able to disregard them because the costs of those limits are not being experienced directly by them. They're being experienced by the 80% at the bottom, especially, as you said, people in the climate niches where they're not going to be able to survive, the one to two billion that are expected to be displaced or to perish this century. They are bearing the costs. And for these tech billionaires that's just small collateral damage on the way to continuing their mission to re-engineer humanity. I mean, the nonhumans don't even get a nod in their world, but even the humans are still seen as disposable. For them, as you're saying, this project of turning humans into humachines is a much bigger project than the mere billions that are going to die. So the biophysical limits at some point are going to come for them, but there's going to be a lot of destruction and annihilation of life on Earth before they see that. And you started talking about these different types of humachination, the emotional, the psychological, the robotization of humans. You've identified five types of humachination in the book. Could you give us a brief overview of each, and we'll go deeper into some of those categories?

    Michael D.B. Harvey (00:38:30):

    I've really just tried to capture some of the dynamics of what's going on. So the first type, I guess the best known, is cognitive humachines: anything that attempts to replicate human decision-making and other basic functions like reading, speaking, translating, all of these areas which generative AI is now working on pretty quickly, though we've seen development in a lot of these areas for quite a long time now. And of course the big one, which we haven't talked about yet, is AGI, artificial general intelligence, which is the original goal of the AI project: superintelligence, which essentially links all of these different types of intelligence we already have, and is then able to do what the human brain is very good at, these kinds of parallel processes, making connections, linking. At the moment we have a superintelligent chess program that can beat the best in the world, but you or I could beat it hollow at checkers because it doesn't have a clue.

    (00:39:40):

    Try to have a conversation with it about the weather and you're not going to get anywhere. So the idea of superintelligence is in itself a crazy idea. It's really the maddest idea humans have ever had: to create something that's more intelligent than the combined capacity of all human thinking. And what exactly that looks like, nobody knows. By definition we can't tell. Even if it was only twice as intelligent as us, we would be struggling. If it's a hundred times, a thousand times, a million times more intelligent, we would have absolutely no way of understanding what this is or where it's going. Even if it explained things to us, we wouldn't have a clue. And it's a kind of recklessness. It goes back to what you were saying about a complete disregard for limits of any kind, a complete dismissal of anything really to do with being human.

    (00:40:39):

    And certainly any sense that limits are actually something positive if you treat them in the right way, something the ancient Greeks understood through the whole notion of Icarus flying too close to the sun, the whole idea of hubris, where you become too big for your boots and you pay the price for it. And I think everything we've been indicating here suggests that we're on the way to that, but at the same time we can't rely on it working out well for everybody. So AGI is just the maddest idea, and an idea I think we really need to oppose. That's one big crazy idea: what happens when you replace everything that humans do, everything we can think about.

    (00:41:26):

    The second is emotional humachines, which claim to understand our emotions largely by tracking various biomedical indicators, stress levels, but also by scanning eye movements and other kinds of telltale body language. And machines seem to be particularly effective at that; they can probably do more than most humans can in detecting these tiny little movements. The idea then is to be able to manipulate. Machines don't have emotions; if you are going to start introducing emotions into machines, then you really do have something quite weird. That's another dimension, I guess: you really are creating another species. As it is, people think that robots could be another species that should have the same kind of rights as human beings. But essentially these are incredibly powerful manipulative tools used to sell more to us, to tell what we're thinking. There are truth-detecting apps which now claim to be able to do this, certainly better than lie detectors, which are not particularly good. They're increasingly being used by the police, by border controls and so on. And, like all of these applications, even if they're only 70 to 80% accurate, they become seen as objective.

    (00:42:53):

    They become seen as the complete way of telling the truth, because nobody can really understand how they work anyway, so it is not as though you can inquire. And that's one of the really awful things, one of the skills I think we're going to lose: even the ability to tell whether your kid is telling the truth. As a parent I'd say quite a lot of the time, probably not. But is your lover telling the truth? Is your friend telling the truth? I think quite quickly we'll get to: there's an app for that, we'll just put it through the test and there you go. And that leads on to the third type of humachine, the relational humachine, which could be an app, could be digital, could be robotic, where these machines try to take over basically every human relation that you can think of: caring relations, looking after the sick, looking after the young, parenting, your ideal girlfriend, your ideal boyfriend.

    (00:43:47):

    Online counseling is another one. Again, this is just software doing the work. Up to a point you might say there are certain things that can be useful about it, but it becomes very, very problematic once you start relying on these things, particularly these boyfriend or girlfriend AI assistants which are also your best friend, your advisor, your counselor, even your lover, because they're developed in such a way that they're incredibly flattering. You are the only person; you are a kind of god. So you possibly turn to your AI because you're not really able to manage two-way, give-and-take, reciprocal relationships, and then you find comfort in an AI which always says you are right, which always builds you up. And they're marketed as enabling you to grow relationships, when in fact they're going to make you even worse in terms of your expectations of human beings.

    (00:44:42):

    So again, it's another sense in which these humachines are degrading human skills, things that we absolutely take for granted as human beings, that we don't even think about as skills or gifts or aptitudes. We notice when they're absent, in people who lack any kind of emotional intelligence or can't read other people's emotions, but on the whole we fairly rightly, I suppose, take it as the norm that people do have those skills. All of those kinds of relationships are basically up for grabs. They're in the sights of the AI and robot developers.

    (00:45:21):

    The fourth jump in terms of the humachines is almost the biggest, and that's the mechanized human, which I've already hinted at. That's the idea that ultimately you can replace the human brain. If it's only a computer, you can find ways to introduce electrodes, basically computerized systems, into the brain so that you can link up with the outside world. That's already being done. It's a big project which Elon Musk and others are pursuing; his outfit is called Neuralink. It's already experimenting on live human subjects; it has FDA approval to do that. And it has had some success in helping people who've been paralyzed, enabling them to use these electrodes to essentially control computers and other machines by thought. You can't necessarily disagree with something like that, but the danger is that it's a kind of Trojan horse for what comes next in Musk's mind, which is expanding the computing power of the brain so that it gets to what Ray Kurzweil would say: 90% of the brain can be a computer. Or, as some people say, turn the whole brain into a computer or a piece of software; why bother with any of that messy wet stuff? So that's one idea of the mechanization of the human body.

    (00:46:45):

    And the rest are ideas around artificial organs. This is already quite a significant industry in its own right; you can imagine it's quite difficult to get organs for human transplants. So there are now artificial hearts, which have succeeded in lasting maybe about six months in some patients, and artificial kidneys, blood, skin: all of these are being worked on. The transhumanist dream there would be that you would have a completely replaceable body which would not only live for hundreds of years but live in peak form. If it looked as though your heart was running down a bit, you'd just get an upgraded version. But of course it also means that the idea of you existing at some kind of distance from the environment disappears, because you are having to be monitored all the time. You have this vast network of services which you need to actually keep you going.

    (00:47:47):

    That's probably where the big money is: in keeping up your constant 24-hour surveillance, because if anything goes wrong, well, you're in trouble. And there's an interesting story that I mention in the book about an experiment, I think it was done in Australia, where electrodes were implanted in people's brains to do essentially very, very simple things like remind them to take medication. For some people it was really eerie and weird, and they said, I didn't know who I was anymore; it really started making me question my existence. For other people it seemed to work, and one person in particular said it was great, it changed my life. Obviously she had real problems. And she was the person who experienced the company that was providing it going bankrupt, and she experienced that as a complete trauma. It's one of those things where there is the sort of ill-starred meeting between science and capitalism, which again is fundamental to ontocapitalism.

    (00:48:49):

    The problem is that sometimes the scientists and doctors and so on are so caught up in their world that they're not really thinking through the implications for other people. Ultimately, you are looking at the transhumanist dream, yes, of becoming a cyborg, being able to change your appearance, having super strength, getting taller or shorter or whatever it is you want. So there's a crazy kind of commercial capitalist logic in this whole thing as well. Whether it'll ever get to that, I'm dubious, because as I said, I don't buy into the brain-as-computer, body-as-machine metaphor. But you can be pretty sure there is going to be a lot of work done on it. It's going to depend on whether other willing subjects can be found. And maybe some of these things work, some of them don't. There's going to be an awful lot of tricknology employed as well.

    (00:49:44):

    Maybe quite simple changes: for example, with life extension, there's a lot you can do just with absolutely world-class medical care. And of course the other area in this field is gene editing and genetic engineering. The argument here, and it comes from some scientists, is: why would you not try to improve the life chances of your children? Make them more intelligent if you could, make them more beautiful or more extroverted or whatever personality characteristics you want. And there are lots of counterarguments. Of course we would say, well, once you start changing even one gene, what happens to everything else? And these changes are then potentially passed down the generations. The problems are legion. But the counterargument is: well, if you are going to send your children to a private school or pay a fortune for a private college education, you're trying to improve the roughly 50% of personality which is probably environmentally weighted. Why can't you improve the other 50%, which is genetic? Though obviously the proportion varies from person to person.

    (00:50:52):

    So that's the capitalist argument, and as a capitalist proposition it's hard to defeat. And of course human cloning is the ultimate one there, but we're already getting animal cloning. It's legal in the United States: if your pet dies, you can have it cloned. That was actually mentioned, I think by one transhumanist geneticist, as the ideal way to get people used to the idea. And it's quite a big thing: Ray Kurzweil is fairly desperate to clone his dad. There are quite a lot of dad-cloning issues. And Larry Ellison of Oracle expressed it like this: death, I don't get it. Why would you die? Why would it just end like that? And if you are worth 400 billion, as he is, I suppose you feel, well, there's just a lot more to spend. And it is interesting to go on to that, because life extension tends to be something that always surprises me, because people say, hey, that's a good idea.

    (00:51:54):

    I wouldn't mind another 20 or 50 years or whatever. And again, it's an idea you need to think of politically. Imagine if you did have Larry Ellison or other tech billionaires living for another 50 to 100 years. It means they continue in charge of those massive technology conglomerates for another hundred years. All of these things which we're talking about in terms of whether they're possible in their lifetime: well, you're now talking well into the 22nd century. And Jonathan Swift in Gulliver's Travels, which is in a sense one of the first great science fiction works, had this covered with the Struldbruggs. Remember, this was the island where some people were immortal, but the immortals were absolutely miserable, because over the age of 80 they had all their property taken away from them, all their legal rights. And you might say, well, why? That seems kind of cruel.

    (00:52:50):

    But it was explained that if they didn't, the immortals would just live forever; they would take over the entire realm, and there would be nothing for anybody else. And that's the way we have to think: in holistic terms, in social terms, in political terms. Jonathan Swift was actually quite conservative, but he thought in terms of the whole society, which big techers really do not. There is a degree of individualism about how big tech and Silicon Valley people in general talk which just goes off the chart, so far off the chart, really, that society doesn't come into it. Society, the kind of concept that I guess we environmentalists and progressives tend to have in the forefront of our minds, just isn't there. Yes, it's a very American trait, but it's been taken to absolute extremes, so that the unofficial philosopher-queen of Silicon Valley is Ayn Rand, the author of The Virtue of Selfishness and other works which basically say there's no such thing as the public, no such thing as collective achievement; all achievement is individual achievement. So that is a very long-winded way of explaining what the mechanized human could bring us to.

    (00:54:18):

    The fifth humachine is really all of that put together. It is essentially a world in which you have a proliferation of cognitive, emotional, relational, and mechanized humachines, all in this kind of interacting, intersecting environment. And it's almost impossible to really imagine what that would be like. But in the book I try to see it as a kind of sequence of events or locations, going from your smart home where everything is roboticized, where every surface is a screen which you can watch but which also watches you, which can sense your moods and your emotions, perhaps changing color or the music, or making other things happen in response to what you are saying or thinking or feeling, trying to help you get through a difficult day, doing the cooking for you.

    (00:55:11):

    And maybe it's not humanoid robots; it's the robotic kitchen which is set up, the robot fridge which is telling Amazon or whoever it is to deliver whatever you need, and so on. I don't know exactly what you're going to be doing, probably just gaming, because we're going to see the world of gaming and virtual reality and augmented reality getting so intense, so impressive, with all this haptic stuff as well, so that you actually feel it and it feels you. You're going along on, whatever it is, a space rocket, and you can actually feel what your body is going through, and you might think, well, what's the point of actually living outside that world? And anyway, what is there to do outside that world? So it's also a perfect manipulation chamber; it's just too easy then to manipulate people. If you do step outside that world, you are into, well, possibly the automated workplace. Probably the only humans there, in robotic factories and so on, will basically just be machine minders.

    (00:56:18):

    The only jobs really will be just looking at a screen and seeing that nothing is going wrong. There won't be much else to do. You might, if you're younger, be going to the kind of automated schools which are basically online. Person-to-person teaching may well become more and more expensive, more and more valuable, something that most people don't appreciate. But even now we've got schools where various kinds of apps are applied to constantly track what the pupils are doing, looking not only into their schoolwork but into what they're doing outside school. So it's a completely manipulated environment. When you get into the smart city, well, the smart city has, again, sensors and cameras; everything is watching you and surveilling you, possibly applying what I call smartheid. Smartheid is one of the aspects of this fifth level of humachination: a kind of surveillance state which is able to conduct a kind of apartheid, much more on an individual basis, where those who are for and against the regime are sorted out into separate areas.

    (00:57:34):

    And this is a bit like what's already happening in China with the social credit system, where if, for example, you don't pay your debts or it looks as though you are engaging in politically nefarious activity, you get punished in some way. You might not get admitted to certain venues; you might not be able to get on a bus; you might not be able to get a ticket. It's very, very easy to do when absolutely everything is automated, and you can see that being expanded so that complete obedience to the regime is demanded, and anything that looks like rebelliousness is punished. Forget about the separate apartheid benches which existed in South Africa: if you do anything wrong, you won't be able to get into the park, you won't be able to get into that part of town, you won't be able to move outside your home.

    (00:58:21):

    And things like that were happening during COVID, particularly in China, where cameras would be put outside people's homes. So that's one form of what that totally surveilled, automated environment might look like, and it's frankly not very good news. But for the elite, who are experiencing some sort of techno-utopia, well, it could be much more pleasurable. And I suppose if you are only interested in yourself and your hedonistic activities, and you're guaranteed some sort of level of sustenance, maybe it's okay. I don't see democracy surviving in those circumstances, though. And what's happening to the environment? Well, it's anyone's guess. It's not going to be good, but you are not going to hear about it. It's a world in which the total manipulation of the truth is possible, and I think actually we've gone quite a way toward that already. And if you think about what's going to happen when billions of people find themselves in unlivable parts of the world, the prospect of climate genocide of one form or another comes onto the agenda.

    (00:59:35):

    And unfortunately what's happened in Palestine is a pretty good indication. You think, well, how could genocide of any kind happen in the full light of day, with people watching it 24/7? And we have our answer. Certainly under smartheid you wouldn't be able to see most of that. You would just be hearing stories of terrible natural disasters nobody could do anything about, but heroically we were trying to save as many people as we could, or whatever lie would be necessary. So we're talking about the increase in surveillance and the increase in automation of every kind, and we've been talking about the United States and Trump and everything that's happened there, which makes all of this sound so much more real, so much more urgent. But we have in the UK our own version of this under a supposedly left-of-center government, which is just going all out for AI and robotics, basically in the belief that's the only thing that's going to bring economic growth.

    (01:00:40):

    And in a way, the G word is in everything I've been talking about. That's the reason why so much of this is going to happen almost regardless of the consequences: because it looks like the only way in which you can actually create economic growth. And if we don't have economic growth, well, what do we have? We don't have capitalism, and then we're going to have to look at post-capitalism and post-growth. Then we get onto the interesting and good subjects. But capitalism, as I suggested earlier, will do everything it possibly can to prevent that happening, to prevent people from even thinking about the possibility.

    Alan Ware (01:01:17):

    You've referred to some of the political and social consequences as so much political and economic power is concentrated within very few companies and increasingly few people, and as inequality is rising. What are we losing as citizens in general, do you think, in this humachine world?

    Michael D.B. Harvey (01:01:39):

    I'm tempted to say everything that is good about being human except our ability to make tools. Well, I think the danger fundamentally is that all our choices are being taken away from us. They're being made by, as you say, a smaller and smaller elite. We are looking at vast amounts of wealth and power. And you've got to remember, these are not just capitalists, not just investors; they are hands-on entrepreneurs actually running their companies, making day-to-day decisions and running these vast technology conglomerates. We tend to think we know them: Microsoft, that's just PowerPoint or whatever, or Meta is just Facebook, or Google is just search. But these are hundreds of different companies doing all sorts of things, and you have one person, generally, who's making a lot of these decisions. So the inequality factor goes back to what I was saying about how seriously you have to take this humachine vision when it's held by some of the most powerful people who have ever existed, because there's always been some distribution of power.

    (01:02:56):

    Power has tended to be concentrated around the military, but there were political leaders who had power in their own right, and that still exists, though now really only in dictatorships. And the dictators tend to be going along, at least for the time being, with big tech. But what are we losing? We are losing the ability to choose, because it is big tech who are making those choices and essentially imposing them on politicians, who I think on the whole have very, very little idea of what this technology involves. So that's the problem: our politicians are not thinking about it. We have a problem also in that there is cross-party agreement, more or less, on technology. The right wing goes along with it because it supports capitalism, it supports growth. The left, certainly traditional social democracy and socialists, tend to go along with it too.

    (01:03:47):

    Technology has always been quite powerful in traditional socialism. And so we even have so-called left accelerationists who say, let's go along with big tech, let's speed it up if anything, let's robotize everything and then make sure that we take it over, we being right-minded socialists who can make all the right decisions for the people. It's a crazy idea. You can see there's something in it, but it vastly overestimates, I think, the ability to take over from those who actually own all of this stuff, who've got all sorts of mechanisms built in to prevent it from being taken over. And there's a kind of naivete the left sometimes has, of thinking that when you get into power it's all going to look different and it's going to be so easy to run things, and it actually turns out to be more complex.

    (01:04:40):

    I've been trying to get the Green Party in this country to think a bit more seriously about it, but probably without success. I think there's still an element of believing that some good regulation is all we need to turn what could possibly be a bad into a good. And I think that's too much the way in which AI in particular is presented: as something that could be fantastic in terms of transforming things for the better, but we've just got to watch out for some of the more negative things. Call me a pessimist, but I tend to see it as something that really is basically quite bad, and we should look at it the other way around and think of it as something that is actually inherently very dangerous.

    Alan Ware (01:05:23):

    Yeah, I like how in the book you mention the historical example of British workers from 1750 to 1840: as steam power and new industrial revolution technologies were introduced, their wages fell by 5%. They were definitely getting poorer even though productivity was going up hugely. And it was unions and political parties demanding equality, and the revolutions of 1848 throughout Europe scaring the pants off British elites. So it was definitely a bottom-up insistence on greater equality; it wasn't some trickle-down noblesse oblige from the tech titans that allowed for that greater equality.

    Michael D.B. Harvey (01:06:05):

    Absolutely, and I think that's a very good point, and it's worth thinking about in terms of the way in which this whole technological revolution is being presented. Certainly on this side of the Atlantic it's often called a fourth industrial revolution, and that's a very clever way of putting it, because it suggests this kind of linear progress. The idea is: look, yes, it's a bit disruptive and it's scary and we don't quite know what's going to happen, but it's always been like that and it's always worked out well. But as you say, it didn't work out well for three generations who actually went backwards into misery. They went from earning some kind of living in a rural environment, where they were totally integrated into that rural world, to living a hand-to-mouth existence in miserable industrial towns, working 16 hours a day alongside their kids.

    (01:06:59):

    Absolute hell. And as you say, it was only when unions started forming, in what were in a sense ideal situations for collective action, lots and lots of people in the same place in factories doing the same kinds of things, with more and more strikes and more and more pressure put on employers, that you started to get democratic reforms and social reforms. At the same time there was a certain incentive for employers to go along with that, because there was potentially more productivity from educated, healthy workers than from sick, illiterate workers, and places like Germany were pushing ahead with that more, so there was a competitive element as well. As you know, in the book I also question whether those things apply to quite the same extent now. I don't think this is just like another industrial revolution. The fact that something has happened before doesn't mean that it's going to happen again in the same way.

    (01:07:59):

    It's all about step changes, qualitative changes, where something is suddenly different. And of course, automating the way we think, the way we feel, everything we do, is quite different from automating how we thresh corn or weave textiles. Most people even then could probably get by without those particular skills. This is a fundamental change, as I say, in being. But it also goes back to what happens to the rich without the poor, or what happens to the poor when the rich don't need them anymore. We have that problem. But I certainly think that the sooner people seriously get their heads around the idea that the aim is to automate virtually every job in virtually every area, the sooner we are going to get some resistance. And it's precisely those arguments that say: look, the only thing that's really going to bring about progress here is the majority asserting their rights, and certainly rejecting any notion that you just follow the technology and it automatically leads to something good, or that this is just another industrial revolution and they always turn out right in the end. Because this one is just unlike anything we've been through before. It's just completely unlike it.

    Nandita Bajaj (01:09:20):

    You've talked so much about this humachine utopia for these tech elites. For most of your book, you kind of lay the groundwork for how what to them is a utopia is really a nightmare for the rest of us. And the worst case scenario you talk about towards the end of the book is this techno-dystopia when it collides with planetary breakdown, with automated surveillance states using violence and humachine corporations continuing to act unchecked. And there are two value systems that you talk about, one that underpins capitalism and another that supports ecological democracy. What are these value systems and how do they shape the kind of world we're building, both socially and psychologically?

    Michael D.B. Harvey (01:10:10):

    Okay, well, let me just say I came to this idea of a set of values underlying capitalism and an oppositional set of values underlying ecological democracy largely in my previous book, which is called Utopia in the Anthropocene, which is like a 12-step change plan for an equitable and sustainable world. And the first step was actually to set a new series of goals, because I think the problem at the moment is in a sense that we are following the goals of capitalism, which is, it's in the name, whatever is good for capital. And in a sense, I mean, we shouldn't be surprised if profit is constantly put above people and planet when that's the world system. That's what it's there to do: create never-ending economic growth on a finite planet, because that's what capital requires. And that of course is what's pushing the humachine revolution.

    (01:11:08):

    So I just came up with a set of alternative goals, which I call SEWP, sustainable, equitable wellbeing, planet-wide. And I sort of constructed a model just around that. But I think it's very important to have some kind of counter intentionality because one of the great problems I feel is that big tech knows what it's doing. It has a very, very clear idea of where it's going, what it's prepared to do to get there. It even knows who we are, it knows who the enemy is. It explicitly defines things like degrowth and sustainability as obstacles to growth that need to be removed. But I don't think we have the same kind of sense of what our goals are and where we need to be going.

    (01:11:56):

    And SEWP is fairly basic. I think there's no planet for eight to 10 billion of us without sustainability. There's no sustainability without much, much greater equitability, where there's something in it for all of us, and we get rid of this pyramid hierarchical system which we've had since the beginning of city state civilization, which is now in potentially its steepest form ever, perhaps even more than in the time of the pharaohs when the actual pyramids started emerging. And we need to focus on an economics, and a whole way of thinking about society, which is based on wellbeing, the wellbeing of humans and the environment rather than the wellbeing of capital. And that needs to be planet-wide. Again, we sort of take it for granted how much of everything that is done politically is done on the basis of what is good for the global north, that western-centric way of thinking about most mainstream politics and economics, which totally disregards the global majority. And I think also planet-wide is a way of focusing on that holistic, systems-wide way of thinking, which is just completely ignored by highly reductionist, partialist thinking, which is constantly reducing something to whatever is smallest and easiest to handle.

    (01:13:18):

    So once you have the goals, and this is kind of basic organizational development from my past working in that field, you want to think of a certain set of behaviors which characterize the old system, and a set that you want to actually encourage in the new way of doing things. So the old system, the existing system of capitalism, and to some extent all hierarchical pyramid-based civilization, is based on what I call CIMENT with an I. So that stands for competitiveness, individualism, materialism, elitism, nationalism, and technologism. Technologism we've already talked about quite a bit. Competitiveness is just this very simple idea that you are at war with other people. Life is a kind of war. We can all be competitive. This is a trait that we already have, and even the most cooperative person has elements of competitiveness, and this can be brought out.

    (01:14:18):

    But we have a system which is fundamentally geared around competition: competing with others, competing to get work, competing when you're in work, being educated, to some extent, as a competitive process to get the job. And then if you do get work, you find yourself competing with other organizations, within a world system in which nations are competing with each other. And clearly many entrepreneurs have this hyper-competitive view which reduces everything to a kind of war where there's only two things, winning and losing, and winning is great and losing is horrible and you want to do anything to avoid losing. And I know from working with entrepreneurs, working often with pretty successful people, that they're often driven more by the fear of losing: losing the wealth, losing status, everything that goes with it. So that competitiveness I think is a very, very powerful kind of element.

    (01:15:18):

    And of course it's absolutely basic to how capitalism works, and to some extent justifies everything that it does. Individualism, again, we talked about in some respects with Silicon Valley and the kind of Ayn Rand ideas, which say first and foremost, your responsibility is to yourself and possibly your family. Margaret Thatcher, remember, famously said there's no such thing as society, just individuals and families, that society doesn't matter. It's just you up against everyone else, and you've just got to keep battling and fighting and developing yourself, but developing yourself not necessarily for what is good for society, but particularly for yourself. At the same time, you have an individualistic system that says, yes, it's those people who are talented and brilliant who serve society: it's the entrepreneurs who are the wealth providers, the job givers. Materialism, yes, you often find that among very rich, very successful people. And you say, why do you keep doing it?

    (01:16:18):

    Why does it matter? You've got enough money. And often the reply will be, well, it's just a way of keeping score. And they certainly don't like it when someone seems to be scoring more than they are. So again, it's a kind of powerful dynamic. And we do live in this kind of intensive consumer society where materialism becomes, as other values kind of disappear, the only thing that really matters. Elitism, the fourth CIMENT value, again is kind of contradictory, but it's still there. There's still an acceptance that actually if everyone was competitive and individualistic, you would have what Hobbes feared, which was a complete war of all against all, and everyone would tear each other apart. So you need a kind of elite system which keeps everything in some kind of form. And as I say, we've inherited that really from the earliest days of city state civilization, when the much more egalitarian and indeed ecologically connected ways in which hunter-gatherers and even early agricultural societies lived, that kind of world, or what I call the eco-equal world, which is how Homo sapiens has actually lived for the vast majority of our history, was gradually replaced as agriculture came into the picture.

    (01:17:40):

    And it was then possible for a very small elite to own the means of food production, and with that all real sources of power. So elitism is still there, and in fact is there more powerfully than ever. And then we have nationalism, quite a recent invention really, which only came into existence essentially in the 19th century, and it's partially related to capitalism and the more developed stages of imperialism, where you define yourself in terms of your nation. Particularly over the past five to 10 years, we've seen an upsurge of nationalism as an anti-migrant, right-wing ploy to fix people into their existing conditions even more firmly. And again, CIMENT is meant to imply a way of making things rigid, making it difficult to see an alternative to capitalism, paralyzing people within the status quo. And technologism, as we've seen, is kind of the way that all of that is put together as some kind of future which is going to lead us to something better, just as long as we allow technology to dominate.

    (01:18:49):

    And that technologism implies mastery of nature. It implies the superiority of human-made artifacts and machines and systems over natural ecosystems and nature in general, in a way that I suggest goes back to the scientific revolution. So what's the alternative? CANDID, which is cooperative, altruistic, non-materialist, democratic, internationalist, and deferential to nature, in that sense ecologism. And this I think is the system of values that we need to actually bring about some kind of SEWP world, a sustainable, equitable world based on wellbeing planet-wide. It's about bringing out the cooperativeness which already exists. In fact, capitalism couldn't really exist as it does without the vast amount of unpaid voluntary work which tends to keep the whole system going: the unpaid reproductive work, generally of women, who are not paid for their work, and of others who volunteer for all kinds of things.

    (01:20:00):

    Just the kind of cooperativeness that people have in communities where people realize that if you don't do that kind of thing, it won't get done. In some ways, the fewer public services you have, the more likely it is in poorer communities that you get that kind of cooperativeness. And again, as I say, we can all be cooperative, apart from perhaps psychopaths, who are the most extreme version of the competitive individualist. And unfortunately it may well be that some of our big tech titans quite possibly fall into that realm, or come quite close to that antisocial personality behavior: people who lack any kind of empathy or real ability to identify with other people. Empathy is, I think, a key element in cooperativeness, that ability to identify with other people, to realize that you get the best out of yourself if you work with other people. And that really leads on to altruism, or a kind of group orientation, which again is I think fundamental if we're going to have any chance of defeating all that's being thrown at us, both in terms of big tech and of course in terms of the climate crisis, which I haven't talked about that much, but it is the other great crisis that is there, as if one weren't enough.

    (01:21:22):

    And of course the two somehow go together in this kind of weird way in which big tech is essentially offered as the solution to the climate crisis through geoengineering, and indeed big tech's green capitalism is offered as something better than brown fossil fuel capitalism. So that's another kind of dynamic that's going on. But yes, altruism is very important. Non-materialism, again, I think of in the sense of trying to bring out a much more kind of spiritual sense of appreciation for nature, appreciation for community, appreciation for the world, so that we have a reverence for life. I mean, I'm not a religious person, and I think when religion gets organized and becomes state-owned, as it has a horrible tendency to do, it kind of loses quite a lot of what is really there. But I think we do have potentially a very, very powerful sense of empathy.

    (01:22:22):

    It comes back to that, a sense of feeling for others and, I think, for a world that goes beyond our world, and perhaps beyond our lives as well, a sense of responsibility or even love for those people who come next. I think it's almost an extension of altruism to actually do things for strangers, help strangers. Of course, we famously, foolishly, will die for strangers, particularly when it comes to war. So that ability, again, is one of those things that we have which we pretty well take for granted, but which we need to work on more. We haven't developed it enough. I think all of these elements, all of these values, are there, but they're kind of underplayed, underdeveloped, and we can see reasons why capitalism doesn't want us to develop those things, and why hierarchical elites in general don't want us to develop them. The fourth element, the counter to elitism, is democracy, and I guess a much more egalitarian approach to democracy than most representative democracy really allows for. In some ways, yes, it's great to have a vote every four or five years, but it's a pretty imprecise way of actually getting across our desires and wishes for the kind of world we want, especially in a world system which we haven't explicitly chosen.

    (01:23:51):

    And that's why I'm a big champion of assembly democracy, of citizen juries, of ways in which small, perhaps randomly selected groups can actually voice their opinions. That goes back to the ancient Greeks, or to Athens, which actually used that method of democracy far more than electoral democracy. Aristotle, for example, was always rather suspicious of electoral democracy because the oligarchs always tended to work it out their way. And in fact, the oligarchs were the only people who could afford to be politicians. So, genuinely randomly chosen citizens, like we choose juries, certainly in the UK. We think it's okay to have someone who has no legal background at all be someone who potentially makes a life and death decision, but we don't think that's good enough for making decisions about whether we should be going for nuclear power, or whether we should be allowing AGI to become a major plank of the government's policy, or whatever.

    (01:24:52):

    So that I think is one of the ways in which we can get more democracy into our system, more egalitarianism, a way in which we can fundamentally, I think, change the way we make political decisions. Internationalism is the fifth element, and again, I think it's fundamental. It's much more challenged now than 10 years ago when I first came up with it, for the reasons I've spoken about: nationalism is becoming the most fundamental thing, and of course it's increasingly playing a major part in the way technology is being developed. It's already playing a major part in the way the climate crisis is being allowed to develop in all sorts of ways, and in the way the enormous amount of resources which Europeans and, say, North Americans are consuming tends to be seen only on a nationalist basis. The way forward for internationalism is difficult, but it has to be the basis on which solidarity is built and the way in which some kind of species identity is formed.

    (01:26:02):

    Again, that's what's so under threat. It's always been a difficult one, to be able to think of what it is to be human and what standing up for being human means. Human rights is one expression of that, and it's very much under attack now, but we have the big tech attack, which is the ultimate one, which is actually to say: humans, so what? They're just fairly inferior machines that can be upgraded. This is the ultimate dehumanization. Humachination is a kind of dehumanization because ultimately it degrades everything about being human, about being alive. So internationalism, I think, is one of the ways in which we desperately need to create some kind of unity; otherwise we're just heading for everyone in their own kind of foxholes, behind the fortress states which are likely to develop, where the victims of the climate crisis are liable to be left to their own devices.

    (01:27:00):

    And yes, the last one, deference to nature, or ecologism. I think that's fundamental. Again, it's as important a part of seeing ourselves as humans. It means seeing ourselves as biological entities and respecting that. Can we see ourselves as being rooted in nature, a nature which is not just some kind of mechanical system, but is itself a kind of choice-seeking, self-creating process, as biological phenomenologists see it? This is a kind of integrated biological way of opposing the mechanistic system, one which sees the cell not just as some kind of component of a system, but as something that is a network in itself and that to some extent, even at a very basic level, is kind of making choices. Can we rediscover that as a value that expresses itself in all sorts of ways, as something that can then take us forward to a very, very different self-directed future, which is better for all of us and for all life forms?

    Nandita Bajaj (01:28:12):

    That's a really nice, incredibly broad picture of both the dystopian vision of these tech elites that you've laid out, and also the alternative, life-affirming possibility that we can reclaim, as difficult as that project seems to be at this time in human history. Thank you so much, Michael.

    Alan Ware (01:28:37):

    Yeah, thank you, Michael.

    Michael D.B. Harvey (01:28:38):

    It's been a pleasure, Nandita. And Alan great talking to you.

    Alan Ware (01:28:41):

    That's all for this edition of OVERSHOOT. Visit populationbalance.org to learn more. To share feedback or guest recommendations write to us using the contact form on our site or by emailing us at podcast@populationbalance.org. If you enjoyed this podcast, please rate us on your favorite podcast platform and share it widely. We couldn't do this work without the support of listeners like you and hope that you'll consider a one-time or a recurring donation.

    Nandita Bajaj (01:29:09):

    Until next time, I'm Nandita Bajaj, thanking you for your interest in our work and for helping to advance our vision of shrinking toward abundance.
