James Bridle: ‘Artificial’ intelligence and lies of a computerizable world (ep393)

The effect of most contemporary technologies is to hide things. That’s how we’ve designed the world. It’s made us incredibly lacking in awareness of the systems that we find ourselves in.
— JAMES BRIDLE

In this episode, James Bridle investigates and complicates modernity’s entanglement with contemporary technology. Ever careful not to throw the baby out with the bath water, they urge us to think critically about the impact of technological advances—particularly as they are embedded within dynamics of power, systems of complexity, and definitions of “intelligence.”

Join us as James and our host Kamea discuss the limitations of data science, how corporations could be understood as a form of ‘artificial’ intelligence, how technology is making the world more difficult to understand, and more. In breaking down the fallacy of the earth as a computational model, James emphasizes the processes of cultivating relationships that are at the heart of thinking and feeling—processes that call on us to activate technologies of relationality.

Subscribe and listen to Green Dreamer in any podcast app, or read on for the episode transcript.

 

About the guest:

James Bridle is a writer, artist, and technologist whose artworks have been commissioned by galleries and institutions and exhibited worldwide and on the internet. They are the author of New Dark Age (2018) and Ways of Being (2022), and they wrote and presented the radio series "New Ways of Seeing" for BBC Radio 4 in 2019.


If you feel inspired by this episode, please consider donating a gift of support of any amount today!

 
 

Transcript:

Note: Our episodes are minimally edited. Please view them as open invitations to dive deeper into each resource and topic explored. This transcript has been edited for clarity.

James Bridle: I consider myself to be someone who grew up with the Internet, in the sense that I'm part of the cohort born around that period—in my case, in 1980—old enough to remember before the Internet, but nerdy enough to get into it quite early. So I first went online in late 1992, 1993, pre-web, on the very early consumer Internet, and then watched it change quite a lot over time. I was on it throughout my adolescence, which was very formative, and then it shaped my education and my choice of education quite strongly. Over that time, it turned from a hobby into somewhere you could go and work.

So it's just been a big part of my life, and that's why I've always been interested in it and working with it in various ways. But its becoming a bigger part of everybody's lives means it's also been something interesting to think about and work with. [And] that's the origin of a lot of my critical writing and thinking over the last decade or so: thinking about the implications of that.

Kamea Chayne: It sounds like you've been very aware of your interactions with technology and their impacts on you, because a lot of other people are sort of just embedded within this whole matrix without really realizing that they are being impacted.

There's a common acknowledgment in the conscious consumerism space, where people talk about how disconnected we've become from the source of all the things we buy or the food that we eat, such that we've lost touch with the impacts of a lot of our choices. That is one way to put it, but another way to address this, especially taking note of one of the key contributors, is, as you share: “For everything made technologically convenient, something is hidden. It’s usually people who are worse off, or getting worse off, because of these tools.”

I would also add, and I'm positive you would agree, that this often includes the more-than-human, who become worse off but are invisibilized as well. In any case, I would appreciate it if you could elaborate on this remark that we've designed a lot of the technologies we use every day to hide things from us. More specifically, who has designed those technologies to lead to these impacts, and what sorts of incentives drive those decisions?

James Bridle: The first thing is…

As technologies become more complex, they just generally become harder to understand.

It's really impossible for any single person to hold, as an example, the whole of the Internet in one's head. There are just too many connections, too much of it there. And that's true of so much of contemporary life. We live within systems of such complexity that so much of them is unknown to us, and those systems have been extended up to, and in some cases kind of past, breaking point, I think, both for our systems of living and for our cognitive ability to understand them. That's partly just a result of increased complexity. It's a result of living at this planetary scale, as so many of us do.

We live within a consumer society that wants to provide us with all of these things. And I think a lot of that is really the unconscious result of living within such complexity, but there are levels to it. So there's an unconscious accumulation of stuff that we all live within. And then there are the conscious but well-meaning decisions, which I think are a big chunk of this.

As an example, just think about the computer systems that you use every day. They are designed to make your interactions with those systems as simple and friction-free as possible, to make your experience as easy as possible. To do that, they have to hide a lot of the complexity of what's happening. At the most basic level, that means there's some really complicated code underneath the hood of something like online shopping. Huge amounts of work have gone into making that easy. And from a very simple perspective, that work has been done to make your life easier.

Of course, [the] side effect is that there's lots of money in doing it. So there's a huge financial incentive, and because of the capitalist systems that we live within, most of the costs of that are external. It has the effect of driving down prices, so labor is paid less in all kinds of places: the people who make the goods, the people who work within the distribution centers, if we're going on with the online shopping analogy. But there's also a huge amount of computation happening, huge data centers in very distant places filled with machines getting hot as they do all this work, consuming huge amounts of energy that you don't see. And the delivery costs, the gases coming out of the trucks bringing you that stuff. All these things are externalities, all these things are the extra costs around everything that we do.

The effect of most contemporary technologies is to hide things. That's how we've designed the world. It's made us incredibly lacking in awareness of the systems that we find ourselves in.

Sometimes that's done deliberately; most of the time it's done thoughtlessly, at least by the people who do it. Though it is built into capitalist systems.

But there are a couple of really important extra points. The first of which is that technology doesn't have to do that. That's not the only way it can operate. One of the extraordinary things about technology in general, and I think the internet in particular—though, again, it doesn't often do this—is that it can make those things clear. If you want to go looking for them, the code to run things is there. We used to operate in systems where you had far less access, actually, to information about how the world works, if you actually tried to go out and get it. So if you are interested, if you are curious, you really can go and find out how stuff works.

And the second thing is, I think people are more aware of this system, of the way in which so much of the world is hidden from us, than we like to think. We're not always consciously aware of it, again, but we're psychically aware of it. And that, for me, really leads to a lot of the uncertainty and deep-seated fear, shading into anger and conflict, that we experience as the dominant tenor of the world today. We know that we lack a real understanding of the world around us, of the forces that shape our lives. We're not so stupid that we're completely unaware of that system, but we have so little access to it that it's doing deep psychic damage to us, just as it's doing real physical damage to everything else that lives on this planet.

Kamea Chayne: In addition to these technologies being designed to make the experience simpler for people, there's also the reality that the world, in all of its vast complexities, is kind of unframable in totality by computed language; also because we can only compute things we understand, and, well, we're coming to see that there's still so much we've yet to understand about the world.

Something that I've been thinking about as I see more and more people using various smart technologies, like watches that constantly monitor people's heart rates, or nutrition tracking apps, or social media, or GPS technologies and so on, there’s this difficult yes-and realization, in my view, that yes, while in so many scenarios and for so many people, these apps make our lives easier, they possibly save lives sometimes, and especially for people who are differently abled, they can really help to improve life quality, and so on…

I cannot help but think about how, in a way, they also dull our capacities to be attuned to and engaged with ourselves and our senses, with other people and their complex and dynamic social cues, and with our world and all of its rich entanglements. Like, why aren't we being taught how to better listen to our own complex bodies and their diverse needs, which cannot be standardized by formulas? Or why do people increasingly not have the time or capacity, even, to slow down enough to pick up on their bodily signals or environmental signals and be more activated and engaged human beings?

So it's this thought that when we become too reliant on and trusting of these technologies, believing them to be more real than the world they have attempted to capture and translate and reduce into computed language and standardized equations, we actually become less smart ourselves: less understanding of what our own bodies' symptoms and the symptoms of our living landscapes mean, less able to navigate social dynamics and conflicts in the real world, which are way more complex than their limited representations online, and so on.

Your talk titled “Is technology making the world harder to understand?” really reminded me of these uneasy questions. And I would be curious to hear you shine a light on this thought of technology possibly making us less attuned to ourselves and the world, and sometimes even less intelligent in the decisions we make.

James Bridle: Yeah, absolutely. Your example of all the self-tracking, the quantification of the self, as it used to be called, those systems of constant monitoring of almost everything we do. They are really a kind of internalization of surveillance, an attempt to control oneself in various ways that, I do agree with you, get in the way of a real understanding of ourselves. They're already designed to be essentially surveillance tools, because no one's selling these things without the intention of getting various useful data back, which doesn't accrue to us.

So if you are using anything that's connected to the Internet in any way, part of whatever you're doing is providing benefit elsewhere. You're selling the labor of your body in all of these ways. And you're also relying on someone else's interpretation of that information, rather than your own, as you say, connection to your body or to the world. And I think, quite crucially, all of this is based on the fundamental philosophical assumption that the world or the body is knowable in this way. This really emerged long before this stuff was getting written down as computer code.

Computer code is the current, perhaps final, perhaps not, iteration of a much older idea: a 16th-, 17th-century Enlightenment idea, the one that gave birth to the scientific method and "rational" thinking, that the world is knowable.

That there are these singular truths that can be ascribed universally, that apply to all bodies, to all worlds, and that can be monitored, and particularly can be optimized in this way. That's the really great lie: that at some point, if we could only gather in enough data, if we could just categorize everything, if we could put every single piece of information into the right box, we would gain this totalizing worldview that would allow us to... essentially, to control the future, by controlling every aspect of our being and our awareness.

This has been the dream of pretty much every domain of dominant science for several hundred years. And it's led to this incredibly compartmentalized, binary, ones-and-zeros world in which we currently find ourselves. And at its heart is this huge paradox: everything that this system attempts to know, it destroys. Everything it attempts to categorize in this way, it actually kills.

The truth of the world is relationships. The truth of the world is an ecology of things in conversation and relationship. When you attempt to study any of them in isolation, you lose everything that makes it meaningful or, in fact, living.

You see this in the sciences. Look at a science like botany, which has spent several hundred years chopping plants up into ever smaller and smaller pieces in the totally mistaken belief that you can understand the life of a being by looking at its component parts, as though it is indeed a kind of tiny machine whose little pieces you can just unscrew and categorize one by one, rather than understanding it as a complete, complex organism that exists within its environment, within its context, with its own history of individual relationships, just as people are.

And so each of these little quantifications is just another attempt to fit aspects of ourselves into a generalized model that simply doesn't hold. It doesn't hold, and it's also based on this great fallacy that we are somehow separable from the world around us, rather than being an integral part of it.

Kamea Chayne: That's super powerful, and there are certainly a lot of costs and impacts to the world being rendered simpler and simpler.

From your book, New Dark Age, you write: “We have been conditioned to believe that computers render the world clearer and more efficient, that they reduce complexity and facilitate better solutions to the problems that beset us, and that they expand our agency to address an ever-widening domain of experience. But what if this is not true at all?

By reifying the concerns of the present in unquestionable architectures, computation freezes the problems of the immediate moment into abstract, intractable dilemmas; obsessing over the inherent limitations of a small class of mathematical and material conundrums rather than the broader questions of a truly democratic and egalitarian society.”

This was really powerful, and I feel like I have a grasp on what you're speaking to here, especially as I apply it to what we've explored on the show before: the grave limitations of reducing climate change into simplified equations of carbon emissions and sequestration. But, either with that or some other examples we could better relate to, I wonder if you could clarify and elaborate on this observation that computers do not necessarily facilitate better solutions to the problems that beset us.

James Bridle: Computers don't create any solutions at all. They simply render the problem in a different way, a different way of thinking about it, perhaps. And sometimes that can be super useful. They're very good at thinking about certain classes of problems. In fact, in many ways, computers are better at addressing, at framing, and sometimes at solving certain classes of problems than humans are: mostly mathematical problems, mostly problems that are complex in certain classifiable ways. There are lots of everyday things we have to solve, like what's the safe weight to put on a bridge made out of such-and-such a material. Engineers are very good at that, but take it up to a certain level and you want some computers involved; they're very good at these things. That's wonderful. That's brilliant.

But the thing is, we've been building computers for, whatever it is, 70, 80 years now. And when they arrived, they were so amazing, and they remain so amazing in so many ways, and we're so easily impressed by shiny things, that we assumed that computers were good at solving all the problems. We looked at this incredibly powerful machine, capable of doing things that no machine had been capable of doing before, and capable of solving a whole bunch of really tricky problems that obsessed us, and that particularly obsessed a certain class of people: those in power, those who were particularly good at making money within certain systems that we built. It was accompanied by a kind of priesthood class who looked really impressive and told us how impressive these machines were, so we basically thought that these were all-powerful machines. That's the mental image we have of them.

A certain way of thinking came about that I notice really intensely in the present, particularly amongst that priesthood class: the financially powerful, managerial politicians, people high up in all kinds of industry, particularly financial people who work with stock markets... These people all believe that the world is essentially mechanistic or computational in this way, and as such, that the computer is a kind of model of the world. So the computer is capable of modeling any aspect of the world.

I think a lot of environmentalists, when we get into forecasting and modeling, also fall prey to this fallacy, to a large extent.

The fallacy is simply this: that the computer is a kind of model of the world, that the world is like a computer, running along a similar set of logics which is then capturable by the machine and simulatable.

If you believe that, then you kind of think that, yes, any problem can be solved by a computer, or by the application of computer-based technologies. And that just isn't the case; it's so far from the truth. The world is not like a computer. The world itself runs on entirely different logics, and often not on logic at all.

So, while the computer is good at modeling certain aspects of the world, when that computational metaphor takes over, it in part confuses us, in part abuses us, and in part just completely fails altogether to be a useful way of thinking about the world. It's one beautiful way of thinking about the world. And I do say beautiful—despite everything I'm saying, I'm a fan of computers as well. I've spent a huge chunk of my life working on [them], and I think they're powerful tools in all kinds of ways, but they must never be mistaken for how the world works, or essentially put in charge of, or in front of, how we actually address the situation of the world as it really is.

Kamea Chayne: Right. They should really be understood as tools and merely tools, and not something that is more real than the much more dynamic and complex world itself.

The last thread I want to weave into this portion of our conversation is the danger you talk about of the western scientific method relying so heavily on past data, or data science, as reference points. One example might be climate change throwing a curveball at sciences that relied on historical data of the Earth's weather patterns or chemistry; another example you also pointed out is artificial intelligence, often trained by absorbing historical data from contexts where racism was rampant.

Can you share more about these points that address the limitations and dangers of data science, particularly for things that have greater socio-ecological implications? Because I find this to be really important in understanding the subjectivity of knowledge and this idea of reliability coming from the collection of just enough data points—as if those collections can really be divorced from the often skewed contexts they emerge from.

James Bridle: I think you've given two pretty solid examples. I'm not sure I have a huge amount more to add. Absolutely, any computational system you look into, you'll find various biases within it because of the place in which it was created.

In the new book, I talk quite a lot about the idea, the metaphor, not necessarily reality, but the metaphor, of raising AI within a particular worldview. I think this is a useful way of thinking about it when it comes to any computer system, whether you want to call it AI or not, or even something quite dumb, or really just anyone or anything... We are products of our environment.

If you raise an AI, for example, within the kind of hothouse of a large contemporary capitalist corporation, then you're going to get a kind of organism that sees the world through the lens of a large contemporary capitalist organization…

which means it's going to be profit-seeking, fairly uncaring about the consequences of its actions, and blind to the individual lives and health of the people and other organisms around it. That's simply what it's going to be, as a product of the way it was raised, of the environment of its childhood, so to speak. And that's true of pretty much anything that we make. It doesn't have to be a complex computer system. Everything that we make that surrounds us is the product of those kinds of influences, as we are, as are all other beings on this planet.

When you take a narrow slice through that experience, which is already neutered and damaged, essentially reduced in a violent way by being represented as data at all, it's already lost so much of what's important. And then that data is such a narrow slice of experience. The idea that you would get something out of that which could be described as anything like impartial is ridiculous. We barely expect that of many humans. Hopefully, we expect it of none. But we have, unfortunately, political systems, and indeed all kinds of financial systems and community systems, that tend to confer that onto people. And that's already bad enough.

When you start to project that idea of some kind of neutral point of view or neutral subjectivity onto non-humans, then you're really making a huge mistake.

Kamea Chayne: When you share that everything, essentially, is sort of raised by the context that it's in, it reminds me of when you said before that we could understand corporations essentially to be AI. Perhaps you want to begin here by first sharing about the traits that are typically attributed to or expected of AI, and how it acts and interacts with the world. I would love to have you share more about this provocative thought of recognizing corporations as artificial intelligence.

James Bridle: I'll be honest. I mean, AI is just not that interesting, really. Actually existing AI is not that interesting, because actually existing AI is nothing of the kind. It's just a bunch of really, really powerful computers. What's kind of interesting is what it's doing. What's kind of interesting to me is why we have this cultural fascination with this thing that we call AI—like why, as a species, we are so obsessed with this idea of an intelligent machine. That, to me, seems to be what AI is perhaps for. Maybe we'll get to that in a minute.

The point about corporations as AI is really that it's useful to point to things that already exist in the world that are forms of non-human intelligence. And corporations are a very good example of this. Corporations can act in the world. They are assemblages, mostly composed of people, but also composed of laws and various rights, that allow them to take in information about the world and to act on it. They have sensors and effectors, as they're called, and they often have legal personhood. Corporations have speech. They can own bank accounts; they can own land.

They can do all these things that a human being can often do. And they have their own ways of reacting. They respond to share prices by acting in certain ways; that's one of their senses. They respond to lawsuits. They respond to fluctuations in the price of goods, and so on and so forth. So they act in the ways that intelligent beings, as we consider them, act in response to the world. Pointing that out, for me, is really both a way of deflating the notion of AI, a way of saying that you can talk about AI in many ways and most of them, as I said, are not really that interesting...

But it's also to point out that the current thing we call AI, the one that gets touted all the time, the one you hear about in the news, is one of these forms of corporate AI, because it's being built by huge corporations, and therefore it's going to have all of these qualities. It's going to have all of these deeply capitalist, acquisitive qualities, because that's the lineage it comes from. That's a useful way of framing it, and it's also useful [for] starting a conversation about what else that intelligence might look like if it wasn't wholly subsumed by capital.

Kamea Chayne: It's often been troubling to think about how a corporation often makes decisions that even the individual humans who work there might not make, if in those same shoes.

In terms of artificial intelligence, I think many people have a reasonable level of fear of it. And others have great excitement for it, due to their hopes and dreams of it being able to far surpass human capabilities and intelligence. But to this, you raise the very foundational question of: where does the impulse to lock ourselves out from this domain come from?

So, like, when artificial intelligence reaches levels beyond human comprehension and our ability to personally engage, which seems to be the goal of AI, we have to ask why we even desire to build things that will become increasingly exclusionary of what we can engage in meaningfully. And yet, still, maybe framing it this way is human-exceptionalist, in how it implies we should only be expected to nurture what we can understand and intelligences we can relate to. We will dive into all of this next, but what comes up for you here?

James Bridle: I think the point about fear is a really good one. One of the fascinating things about the contemporary wave of AI is that we're all scared of giant killer robots, which is mostly what we think of when we think of AI. And for good reason, right? Because giant killer robots are scary. But what's particularly noticeable is the number of people who are directly working on this, the heads of the big corporations that are betting on this, senior people at Google and Microsoft, who go on the record and say AI is the single biggest threat to humanity, and who are actually working on it at the same time. And I think they're both wrong. I think capitalism is the single greatest threat to humanity.

But that's because they can't tell the difference between the AI they're building and capitalism; or perhaps they don't think there is a difference. And I would argue otherwise. But I also think it points to this kind of power—this fear of other intelligence in general. It points, essentially, to the deeply destabilizing idea that human intelligence might not be the only game in town. Because the people who say things like this, the billionaires, let's say, are used to a system of exceptionalism in which they are the exception. And by whatever their leap of logic is, they, a very small minority, are justified in being richer and more powerful than everyone else, probably because they think they're smarter.

And so the idea that something else might be smarter, that something else might supplant them is, of course, tied to the idea that something might happen to them, that's been happening to all the other ‘stupid’, ‘poor’ people, all this time.

It's just they think AI will do to them, what they've been doing to everyone else, all of this time. And so, of course, they're terrified of it.

But it speaks to a fear that this state of exceptionalism, of human exceptionalism, might come to an end. And that is scary if only large corporations and a couple of governments are building these AIs. But it's not, if you have a broader idea of intelligence; if you know, in fact, that we've always been surrounded by intelligences, many of which surpass us in all kinds of ways. That doesn't have to be a threatening thing. We can actually have relationships with non-human intelligences that are meaningful and full of care, even if most of us within the dominant paradigm have forgotten that.

Kamea Chayne: Well, Ways of Being is your more recent book, in which you pose this question of what intelligence even means and whether we ought to be so human-righteous as to only recognize humans as having intelligence.

As an invitation to expand who or what we understand to be intelligent, what are some examples of intelligence from other organisms you've come across that have really fascinated you, that you think could really humble our human intelligence in some ways, and even so-called "artificial intelligence"? What are some of these findings we should ground ourselves in?

James Bridle: Even though I will happily say that there are various kinds of intelligence that may surpass human intelligence in certain narrow domains, it's really important, I think, not to start from a position of saying this one is better at this thing than that one, not to go into this already trying to create another hierarchy. Because that's essentially what our historical and most popular definition of intelligence is: it's always based on this notion of hierarchy, and of course it has humans at the top, because that's how it works. So over the last hundred or so years, we have moved, again in the west and within dominant science, from a position of basically considering all non-humans to be brutes or robots or mechanisms, to a position of recognizing certain abilities of non-humans in all kinds of interesting ways.

But those have always been performed via tests designed from the human perspective. A really classic example of this is the mirror test, which has been used since the 1970s within animal studies to try and figure out if a non-human animal has what's called self-awareness: whether it recognizes itself as an individual, distinct from the world. Now, for humans, that's a really important ability. We think it's one of the things that really marks us out as being special—that we see ourselves as individuals. We're not just machines going about our business in the world with no relationship to one another or to other things. We are singular individuals, and we like to make a big deal of that.

And so one of the ways that gets tested for is the mirror test, where various animals in deeply unnatural situations [have] mirrors placed in front of them. Usually, the technique is to sedate them in some way and mark their faces, so they might get a mark on their head, and they'll be examined in front of the mirror to see if they touch the mark. Now, it's a really crude way of seeing: do they understand that's a mirror, and do they understand that it's them they see in it? And the ways in which different creatures respond to this are fascinating. The first tests were famously done on apes, because we all know that apes are like us, and so therefore they should kind of be intelligent. And you get a few different results.

And gradually, over time, this test has been done on all kinds of different animals. There was a really great article that came out just the other day, where they've been trying to do it with penguins. They put loads of mirrors out on the tundra in Antarctica, out on the ice, and watched what the penguins did when they came past; they're not really sure. Basically, this whole thing collapses once you start to get into cultural differences between various creatures. Once you get a certain distance from humans down the ape line, [unintelligible] monkeys don't pass the mirror test, but they seem to pay attention to the mirrors in different ways. Macaque monkeys, for example, do a lot of grooming in the mirrors, but they don't touch the marks.

One possible explanation for that, although it's possible we'll never know, is that for macaques, faces just aren't that important. Butts are important, to be clear; that's their main signaling method to one another, and so, of course, that's what they look at in the mirror. And it actually turns out that humans don't pass this test evenly across human cultures. So, in fact, there's not even a universality of human experience with this test. There are multiple ways that humans respond to it, and so, of course, there are multiple ways animals do. Famously, attempts to get dolphins to pass the mirror test were foiled for many years, because the dolphins absolutely refused to respond in any expected way, but did have sex in front of the mirrors all the time.

There's just not any single stable apparatus that we could set up in order to put the way we think about the world on a measurable parity with other creatures.

But all of the examples I've given, and many, many more that I could give, point to the fact that there are incredible types of thinking going on. And what's important to understand about these types of thinking is that they're embodied and they're relational. What I mean by that is that the types of intelligence any kind of being has are a result of its embodiment and its experience: the kind of body that it has, its enfleshment.

The classic example of this is tool-use tests. There was a famous example of the gibbons, who for years refused to pass a particular test which involved using a stick to get a treat. And as a result, scientists just thought they weren't very intelligent. Gibbons were classed as being more stupid than a bunch of other apes, because they wouldn't use a stick to get a treat in the way that a bunch of other apes would. Until one day someone redesigned the experiment and hung the sticks from the top of the gibbons' enclosure, and the gibbons immediately grabbed the sticks, hooked the treats, and had a little snack. One way of understanding what was happening there, again potentially, is that gibbons are brachiators: they live up in the trees, and so they have an awareness, a thoughtfulness about the world, that looks up more than down, and they have bodies that are more adapted to grasping certain things, and all these things play into the way that they do intelligence.

For science, the gibbon only became intelligent in the moment it passed this test, because we had arranged the apparatus in this particular way. But of course, they were intelligent all along. It was just that we needed the apparatus to see their intelligence for ourselves. So intelligence is embodied in these particular ways for all different types of beings, some much more radically different from us than apes. And secondly, the other really important quality of intelligence I think about a lot is that it's relational: it comes out of a relationship. It's not a thing that exists only within the head or the body.

IQ tests? Completely meaningless, because what they're measuring is only what's going on inside a mind. Whereas intelligence is something that arises in relationship to the world around us, and can in fact arise in relationship to pretty much anything. And when it does, it's a mutual quality. So our intelligence is not a singular quality.

Our intelligence is a measure of our relationship with other beings, and that exists between all other kinds of beings as well.

Kamea Chayne: We've explored this idea of how knowledge is relational on the show before, so I appreciate the similar idea that intelligence should also be seen and recognized as relational.

With all of this coming together, maybe this isn't just a call to redefine intelligence, or simply to expand or shift its meaning and measures within the same underlying worldview, but really an invitation to undefine intelligence altogether. Is that the key essence here, and the underlying message?

James Bridle: I set out to write and think about intelligence as my way in to thinking with other beings, and that starting point was conditioned by my background in technology, my background in dominant science, and the received ideas I had about intelligence. I learned a lot through that thinking and writing process, and that, on the one hand, made me rethink my definition of intelligence and try to reframe it in certain ways, but also made me think that maybe intelligence is just not the most important quality. It doesn't have to be. It certainly shouldn't be the thing that we use to decide much else. It's one interesting thing that's in play here. Rethinking it may allow us to have different relationships, but it's definitely gone down, in my estimation, let's say, as the most important, or even a particularly important, quality for forming the basis of relationships.

Kamea Chayne: Finally, I want to underscore this from your writing. You share: "The awareness of historic injustice is crucial to understanding the dangers of the mindless implementation of new technologies that uncritically ingest yesterday’s mistakes. We will not solve the problems of the present with the tools of the past."

With this, I would love for you to close off our main conversation here by sharing more about your remark that you are opposed to solutions, or solutionism. And what additional questions can we continue to think through to guide us towards collective liberation and alternative futures?

James Bridle: My opposition to solutionism really comes out of the technological practices we discussed earlier, because anything that emphasizes single universal solutions is going to be relying upon those calcified processes that you described so well. It's going to take a fixed and probably fragmentary understanding of the world, rooted in the past, and try to fix something that it barely understands. And that's clearly not the way to do it. There's no end point to the processes that we find ourselves in. I think that's also the greatest problem with that view of solutionism: [it] imagines some fixed future point at which all these things are solved and the world is perfect and made whole again. That's not going to happen either.

We live within a constant process of unfolding and becoming, and becoming together, in which there's no fixed end point that we're ever going to reach. We just have to move ever more into ever-better relationships with one another, and those are going to keep changing and unfolding.

Rather than thinking about solutions, we have to think about processes. It was one of my personal heroes, Aldous Huxley, who put it: means, not ends. And we have some of the means to think about that.

If we put relationships at the heart of our thinking rather than fixed ideas or supposed utopias, those solutionary points at the end of time, then we can start to shift our relationships.

And for me, increasingly, that comes from having better relationships, certainly in the context of planetary relationships with the world around us. It's very obvious that the entire basis of our relationship with the planet, as it currently exists, is fundamentally flawed. And that's not going to be shifted by anything resembling a technology, or any kind of tweak to the capitalist ways in which we live at present. It will only be changed by a complete and utter philosophical shift in the way that we relate to the Earth.

Because it doesn't matter if you think a field should be dug up for oil, or if you think that field should be fenced off and protected and preserved. If your underlying mindset is still that that field belongs to humans to make decisions about, without the input of other beings, then there needs to be a fundamental shift in the way that we relate to the planet and all its other inhabitants, in order to even think about making the kinds of decisions that put us into better relationships into the future.

And that, for me, has been the great realization of doing this kind of work. It has nothing to do, really, with intelligence, and everything to do with paying attention, listening, opening oneself to the world around us, and trying to work with everyone else in order to begin to mitigate, and start to move towards a future beyond, the fairly horrific effects of climate change that we see happening right now.

~ musical intermission ~

Kamea Chayne: What has been an impactful book that you've read or a publication that you follow?

James Bridle: I'll mention two books that I've really enjoyed lately. The first is called Imperial Mud, by James Boyce. It's a really extraordinary history of the enclosure of the Fens in Britain from the 15th up until the 19th centuries, which really solidified for me an understanding of the relationship between environmental crimes and the crimes of colonialism and imperialism. This was the period just before the British Empire started on its global imperial conquest, in which it prototyped those processes at home: subduing huge areas of common land that were also areas of incredible biodiversity, in which you saw both humans and the other beings they shared that land with being subsumed by these common acquisitive processes. It's an incredible work of history that really allowed me to put those things into conversation, to go back a little further in ways I hadn't understood before.

And the other book I'm enjoying at the moment is Sabrina Imbler's My Life in Sea Creatures, as it's called in the UK; it has a different title—I think it's something like How Far the Light Reaches—in the US. But anyway, it's Sabrina Imbler's work. [They are] an extraordinarily brilliant science communicator and environmental journalist, and this book talks about personal history, and particularly queer history, in relationship to marine organisms. I just love it and recommend it highly.

Kamea Chayne: What is a personal motto, mantra or practice you engage with to stay grounded?

James Bridle: There are two quotes, and they're both by old white men (that's my education; I should really think of other ones), but these two immediately pop into my head. The one that I mentioned earlier, Aldous Huxley's means, not ends: the constant insistence that the way in which we go about our lives every day matters, and doesn't just matter because it's important in the moment, [but also] because it defines the place that we will get to. And the only way to get to where we might want to get to is to act, in the present and the everyday, how we would like the world to be.

And the other one... I've been thinking quite a lot about Emerson and Thoreau recently, and so Ralph Waldo Emerson's pithy little "consistency is the hobgoblin of small minds" has been written on one notebook or another of mine since I was about 12, and it will have to do as well.

Kamea Chayne: What is one of your greatest sources of inspiration at the moment?

James Bridle: The world around me. I moved, kind of by accident, to a small island in the Mediterranean about three years ago, just before COVID hit, which obviously shaped what happened immediately afterwards quite strongly. I had lived all my life inside large cities: 35 years in London, another five years in Athens, and a few other mega-cities along the way. In the last couple of years, I've been lucky enough to have both the time and the surroundings to pay a bit more attention to the creatures that have always been there that I hadn't even noticed. And every day is a source of utter wonder when I meet some new being who has something to teach me; I couldn't even imagine thinking or saying those words until a couple of years ago. So I find myself incredibly lucky to be in the place that I am, surrounded by so many teachers and extraordinary beings, who, it turns out, are there waiting to talk to you, if you just shut up and listen.

Kamea Chayne: James, thank you for such a thought-provoking and enriching and stimulating conversation. I'm really looking forward to listening to this conversation and getting something new from it each time.

Note: This conversation was originally recorded at the start of 2023.

 
Kamea Chayne

Kamea Chayne is a creative, writer, and the host of Green Dreamer Podcast.

Previous: Vijay Prashad: Reviving collective life and scaling small gestures of care (ep394)

Next: Eben Kirksey: Boundless entanglements with the virosphere (ep392)