Exit Music (notes)

It’s not techno-optimism. It’s techno-capitalist optimism.

It’s the kind of optimism that doesn’t see things like the mass extinction of species as within its scope of problems to solve, presumably because we could eventually build technology that would obviate any need (from a human perspective) for other species to exist.

I imagine instead a completely different kind of techno-optimism: one that leads to the widespread distribution of ever greater life-enhancing technē. Call this techno-ecological optimism.

(From a Mastodon post of mine from over a year ago that recently popped up in my feed.)

(Another Mastodon thread I'm afraid; this is how things seem to come out of me at the moment. It's overly laconic and possibly a bit Rorschach-y. Hopefully I'll be able to expand on this better in the future.)

People working in tech deserve both awakening to their part in the state of the world, and also compassion for not already knowing it or knowing how to extricate themselves yet. (And I say this to myself as someone a quarter-century into a tech career who has slowly come into some understanding.)

Not only can we understand our part in the state of the world, we can start to think about our part in the transition to a world that humans and other living creatures would want to live in.

That world is not a world of efficiency of production or transfer, but a world of care and connection and craft. A world where the way we live is an expression of the love that binds us to the real place we live in, and the creatures we share it with. A world whose sacredness is shot through with the everyday.

This world will come about again, one way or another. My hope is that the more we can create conditions that make us ready for the arrival of this world, the more of us there will be to enjoy it.

There is a lot of work to do, from the local to the global; from the socioeconomic to the ecological and political; from coordination to reconciliation.

The transition will be complex and messy and full of upheaval. We will have to use parts of the old system to take us to the new; we will have to use parts of the old system to protect us from other parts of the old system.

There is a place for all of us in this work, and in this transition. But there isn't necessarily a place for the things that made us what we believed ourselves to be – our skills, the things we produced, our learned place in society, our expectations for what life would be.

Still, that can be – I want to say freeing, but I don't mean bringing freedom to us as atomised individuals, to each do as we think fit. I mean rather, that this knowledge can bring us agency to act through our relation and our mutual dependence. And that is why I feel that the path of love is the path to love.

Dalí painting _The Face of War_

I originally wrote this (in 18 minutes!) as a stream-of-consciousness Mastodon thread. Thought it might be worth putting it all together here though.

“Cognitive task” is an ontological sleight of hand used to obscure the distinction between the way a human would perform a task and the nature of the task itself. This blurring is then used to conflate human cognition with what neural networks do, when in fact neural networks work similarly to only a small subset of animal cognition.

For example, doing arithmetic is a “cognitive task” for humans, but nobody (or very few) would argue that a calculator doing the same arithmetic is using cognition to do so.

The thing is, animal cognition is inextricably an embodied process. Affect is not a side-effect of cognition but its root.

The fact that we have computerised the production of outputs plausibly similar to those of animal cognition only means that we anthropomorphise the process producing those plausible outputs. We wrongly assign intention and goals to AI models like LLMs because we mistakenly infer the nature of their insides from their outsides.

It is meaningless to talk of AI goals or intent, or at least meaningless to think of them as in any way isomorphic to animal goals or intent, as the mechanism for the production of goals and intent fundamentally does not exist in AI models.

This false theory of cognition is extremely dangerous, because it leads us to waste time on fallacies like AGI/superintelligence wiping out humanity through some misplaced intent + agency. In reality the risk is both more proximate and more mundane than that, and is the same risk that has been playing out for at least hundreds of years.

We have repeatedly demonstrated our willingness to deploy technologies whose socioeconomic impact we do not understand and cannot forecast, in order to obtain a profit.

The AI apocalypse looks much more like an accelerated runaway-IT problem: replacing components of complex socioeconomic infrastructure (that might have previously been driven by people or technology) with AI will cause massive damage.

This damage will come from the unpredictable failure modes of systems that depend on certain kinds of AI – failure modes that, in a context of complexity, will cause harmful ripple effects.

The damage will be exacerbated by (1) the continued substitution of software for people in decision-making where there is an incentive to delegate accountability to a system that can't be questioned, and (2) the proliferation of software problems that are impossible to diagnose and impossible to fix.

The good news about this understanding of the AI apocalypse is that we are not fighting against an emergent superior machine intelligence. We are only fighting the dumbest, greediest instincts our human society produces. And that is something we know how to do.

Happy weekend!

How instead can we imagine what is to come, prepare for it, while also refusing its normalization? We are shifting into a future for which we are biologically ill-adapted, a situation we have created for ourselves and that we now have little hope of reversing. How then do we move into the horror? What does “survival” look like? Is it just doomed persistence? What are the moral parameters of these futures? What is right action when all hope is lost?

We should design for moral coherence. If we design for anything, our designs should aim to bring our habits and actions into line with the values we truly hold, or perhaps more accurately, the values we wish we were able to hold.

“The type of self I am capable of being is incredibly constrained by my context.” – Taylor Guthrie, https://overcast.fm/+AA2tlVFUsQs/45:28

We have such a poverty-stricken understanding of what life is that, when we create new technologies that replace living processes, we can’t even comprehend what it is that the technology replaced.

It’s tragic that ideas such as “harmony” and “flourishing” are now mostly written off as feel-good eastern mysticism, or even worse, new-age woo, even while western science is only just beginning to discover them for itself.

That mycorrhizae are essential to soil ecosystem health and thereby to fertility, and that trees are essential to the presence of mycorrhizae, is now accepted in botany and ecology – and still irrelevant to agricultural industries that look at soil as a porous medium for the conversion of seeds into crops by the addition of nitrogen compounds produced with unimaginable quantities of fossil fuel. That people have, for thousands of years, had practices that keep soil alive, able to hold water, and a fertile bed for the cultivation of plants without the help of the Haber-Bosch process is irrelevant to those who see feeding 10 billion humans in a dangerously warming world as an engineering challenge.

The problem with ancient practices, of course, in a modern world, is that the processes that ancient practices supported are no longer the means by which industrial humans make their living in the world. Our connection to the physical is largely mediated by machines – machines that cause or require the destruction of natural systems and traditional practices. We have evolved our machines to have effects at ever-larger scales, and in so doing have pushed living systems, which exist in definite and local places, to the margins. All of nature is grist to the mill.

Well, if we have evolved technologies with global footprints, surely we should have evolved practices of care with global footprints too? But of course, no – the very cast of mind required to produce these kinds of machine is that which devalues care and other such old-fashioned inclinations. We act as if, as long as we produce enough, efficiently enough, we can outrun the need for care. We just need to keep designing new artificial components of the pyramid of human technology to replace the few remaining natural systems that are keeping us alive, as, one by one, they give up the ghost.

Eric Liddell (right)

Adrian Bejan’s Constructal Law is something I keep thinking about. It states:

For a finite-size flow system to persist in time (to live), it must evolve in such a way that it provides easier access to the currents that flow through it.

Much of Bejan’s work has been about documenting instances of the Constructal Law in natural processes; it’s an elegant theoretical basis for predicting the allowed evolutionary pathways that ultimately produce the natural and organic structures we find beautiful.

Now, not to take too enormous a logical leap, but, as I have started to argue elsewhere (and would like someday to finish that argument), I believe that beauty is more than an aesthetic experience of highly-intelligent beings. Rather, it’s a relational encounter between parts of a system that is evolving, as the Constructal Law states, towards more ease-of-flow. The experience of beauty is a retrospective signal that the system is evolving in such a way that allows it to persist in time; that is, to live.

This interpretation implies a profound truth—that when we perceive beauty, we are recognizing the conditions of our flourishing. If we pay attention, we can understand how to nurture and tend to those conditions and add to the sum total of aliveness of our world.

Many of my friends, being in the world of software development, are very aware of the architect Christopher Alexander through his development of the idea of pattern languages. Most of them have not encountered his later work, _The Nature of Order_—a four-volume magnum opus that laid out a theoretical framework for what he called “aliveness” or just “life” in designed structures. In this work, he described 15 properties of living structure. Until I read about the Constructal Law, I found myself unable to relate to Alexander’s use of the word “life”—I knew that he meant it as more than just hyperbole, but I couldn’t quite grasp why these properties in particular were seen in so-called “living structure”.

I now understand it as follows: Alexander was throughout his career describing design processes that were iterative and evolutionary, where the resulting structure was to belong harmoniously, and to “help” its local environment. Because he was an architect, he mostly addresses the design processes leading up to the creation of a building or a campus. But he also always talks about how a building or an object evolves in use, and evolves the functions of the space it inhabits. These processes, to me, are analogous to what Bejan describes in the Constructal Law: the building evolves, in use, always to provide “easier access to the flows”.

To Alexander, beauty was also a sign of this aliveness. I now understand that his 15 properties of living structure can be seen as the results of Constructal evolutionary processes, and that he and Bejan were barking up the same tree.

Connecting these dots has given me a lot of hope and peace; moreover, it has given me an aesthetic heuristic to understand if things are going well for a system, whether that be a process in my workplace, a relationship, or the broader arc of my life. Do I perceive beauty and ease unfolding? If yes, what conditions are supporting the unfolding of that beauty? If no, what would be beautiful that could evolve from here? What conditions would support that?

[God] made me fast. And when I run, I feel His pleasure.  — Eric Liddell, in Chariots of Fire

Life is supposed to be full of beauty; that is what it seeks. Be a vessel for that beauty to come into the world.

This is just a quick post to say I've turned on email subscriptions to this little journal. It's there in case you want my ramblings in your inbox, obviously. The signup form is at the bottom of every page.

Cheers!

I keep re-learning about conditions versus consequences. In a conditions-first model, we create the conditions that make the next step possible. The next step unfolds out of the first step. This is the recursive model of momentum – we are continually creating the conditions for the natural unfolding of the next thing.

The standard model of productive/creative momentum is the iterative model, which is the opposite of the recursive model. In the iterative model, we do one thing on the list, then we do the next thing. Stringing together done-things-on-the-list is the game. This is fine unless you are one of those strange people who needs access to a deep pool of motivation for every single stupid task you are trying to get your stupid brain to do. Then, the iterative model can break, as it assumes that motivation is either intrinsic to each task, or comes from somewhere outside of the task.

At the moment, I seem to be one of those people to a sometimes pretty pathological level. So I need this task to be energised by a powerful motivation, and I need this task to naturally create the powerful motivation for the next one.

I just want to be able to think clearly and express what I’m thinking. It seems I’m great at finding relevant wisdom when I have something to react to — a foil for my spirit? — but without that, the mother of all blank canvases.

Perhaps some good generative pretexts:

  1. Imagine someone coming to me for advice. What do they ask? What are they really asking? How would I help them to think about their problem? What advice would I give them?
  2. Create something just good enough to react to. Then recurse – create a reaction to that thing, which in turn spawns another reaction. Recursive momentum rather than iterative momentum.
