Exit Music (notes)

At SXSW recently, IDEO presented a robot pet: a furry blob designed to trigger mammalian caregiving instincts. Soft fur, leopard spots, curled sleeping posture. The “push” (in IDEO-speak): “we're inventing a new life-like form unconstrained by biology, so why do we keep making robots look like sad metal humans?” The design question being asked is: what shape should the robot take so people will trust it, love it, not fear it?*

Not “what is this thing doing?” Not “who controls it?”, “what data does it collect?”, “what energy does it burn?”, “whose interests does it serve?”. Those questions are basically unaskable in the default “human-centered” design discourse.

The design operation here is domestication, not only of the object but of the viewer's perception. The fur and the sleeping pose domesticate your response to a system that is not domestic, not alive, not dependent on your care, and quite possibly not serving you. Beware objects that seek to become your pet, lest you become their pet.

Is this a dark pattern? In the classic sense (roach motels, confirmshaming, trick questions), a dark pattern assumes a designer who sees through the deception and deploys it strategically. The designer is the trick's performer, and the user is the victim.

A domesticated robot is different. A trick is being played, but the designer is not the performer; the designer is the first victim. The fur and the sleeping pose, features that trigger the human care response, render the full consequences of the object illegible, to the designer as much as to anyone. Believing they are simply expressing the friendly possibilities of the designed object, the designer doesn't see that they are producing something else. So it's not really a dark pattern as dark patterns are usually understood.

The real darkness is in the discourse.

Human-centered design, as practiced, has a grammar. That grammar can express: affect, delight, trust, playfulness, form, accessibility, inclusion. It does not get to express: indirect harms, downstream harms, harms to non-user-actors, epistemic harms, ontological harms. It has no vocabulary for what the designed object does to people who never interact with it, or to systems that have no interface.

Zachary Kaiser's Interfaces and Us (bookshop, kobo)† makes this precise and explicit: the interface pre-specifies what a person is (a set of preferences, clicks, consent toggles), and everything outside that specification — the power asymmetry, the surveillance, the coercion — has no place in the representation. It's not just hidden, it is ontologically excluded.

A dark discourse is one where what's rendered unknowable is specifically the harm the discourse itself generates. It reproduces entire communities of good-faith practitioners, because the discourse is what trains them, credentials them, rewards them, and gives them their sense of professional identity. You can swap out every designer and the dark discourse persists.

There's a tension at the heart of “human-centric design” today, one exacerbated by the sheer power of our technologies and the ruthlessness of those who would roll them out across the world. The tension concerns how much a designed object should communicate its inner workings. As the “stack” has grown ever bigger and more complex, and its capabilities more advanced, the default approach has been to simplify the object's expression to its user as far as possible, and to hide its inner workings except where the object needs input from the user. This renders tools usable and powerful, sure. But it is also used to epistemically disconnect users from the harms the object might pose, whether to the users themselves, to society, or to the biosphere at large.

The human-centric tradition's answer to cui bono — who benefits? — is in practice, despite protestations to the contrary, the user. A more accurate observation might be qui pendit, solus penditur — only the one who pays is counted.


* I want to make it super-clear that, while this post may have been triggered by seeing this particular thing on LinkedIn, I am seeing very few designers updating their assumptions about what constitutes “good design”, or what the role of the designer is in society, in the light of the last decade and a half of all-consuming technological/capital onslaught.

† A book whose subtitle, “User Experience Design and the Making of the Computable Subject”, is an eye-opener in itself

Over here on my more “formal blog”, I wrote a thing about, I dunno, the subservience of all modern ideologies to industrialism, and liberalism's blindness to the baked-in nature of the suffering and oppression that enabled our story of progress, and the need for us to need each other as a way out of the torment nexus. And stuff.

I originally intended that blog to be the basis of a book I've been thinking of writing. There's an outline of the book there. It's about our horror of collapse and our devotion to the religion of progress, and it's about how our false narratives of both collapse and progress blind us to the fact that we need (and will get, like it or not) a far more radical “transition” than that which the technocrats are arguing over.

But more recently I've started to question (1) what my purpose was in writing about that, and (2) the originality or usefulness of my prescription.

So this most recent thing I wrote was coming out the other side of that, to say simply that we need a socioeconomic system based not on production but on mutual care. While this may seem radical to rich Westerners in 2026, many other communities and societies have had (and had to have) such systems, for the same reasons we are going to need them: they could not trust the disembodied institutions, the ones supposed to underwrite the individualist identity of the liberal subject, to support and serve them.

I hope you enjoy it!

It’s not techno-optimism. It’s techno-capitalist optimism.

It’s the kind of optimism that doesn’t see things like the mass extinction of species as within its scope of problems to solve, presumably because we could eventually build technology that would obviate any need (from a human perspective) for other species to exist.

I imagine instead a completely different kind of techno-optimism: one that leads to the widespread distribution of ever greater life-enhancing technē. Call this techno-ecological optimism.

(From a mastodon post of mine over a year ago that recently popped up in my feed.)

(Another Mastodon thread I'm afraid; this is how things seem to come out of me at the moment. It's overly laconic and possibly a bit Rorschach-y. Hopefully I'll be able to expand on this better in the future.)

People working in tech deserve both awakening to their part in the state of the world, and also compassion for not already knowing it or knowing how to extricate themselves yet. (And I say this to myself as someone a quarter-century into a tech career who has slowly come into some understanding.)

Not only can we understand our part in the state of the world, we can start to think about our part in the transition to a world that humans and other living creatures would want to live in.

That world is not a world of efficiency of production or transfer, but a world of care and connection and craft. A world where the way we live is an expression of the love that binds us to the real place we live in, and the creatures we share it with. A world whose sacredness is shot through with the everyday.

This world will come about again, one way or another. My hope is that the more we can create conditions that make us ready for the arrival of this world, the more of us there will be to enjoy it.

There is a lot of work to do, from the local to the global; from the socioeconomic to the ecological and political; from coordination to reconciliation.

The transition will be complex and messy and full of upheaval. We will have to use parts of the old system to take us to the new; we will have to use parts of the old system to protect us from other parts of the old system.

There is a place for all of us in this work, and in this transition. But there isn't necessarily a place for the things that made us what we believed ourselves to be – our skills, the things we produced, our learned place in society, our expectations for what life would be.

Still, that can be – I want to say freeing, but I don't mean bringing freedom to us as atomised individuals, to each do as we think fit. I mean rather, that this knowledge can bring us agency to act through our relation and our mutual dependence. And that is why I feel that the path of love is the path to love.

Dalí painting _The Face of War_

I originally wrote this (in 18 minutes!) as a stream-of-consciousness Mastodon thread. Thought it might be worth putting it all together here though.

“Cognitive task” is an ontological sleight-of-hand that obscures the distinction between the way a human would perform a task and the nature of the task itself. The sleight is then used to conflate human cognition with what neural networks do, when in fact neural networks work similarly to only a small subset of animal cognition.

For example, doing arithmetic is a “cognitive task” for humans, but nobody (or very few) would argue that a calculator doing the same arithmetic is using cognition to do so.
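The calculator point can be made concrete. Here is a minimal sketch (plain Python, my own illustration, nothing from the original post): a few lines that perform exactly the same task a person doing mental arithmetic performs, by mechanically walking a syntax tree. The task gets done; no cognition is anywhere in sight.

```python
# A trivial "arithmetic performer": it completes the same task a person
# doing mental arithmetic completes, yet nobody would call this cognition.
# The task and the way a mind performs it are distinct things.
import ast
import operator

# Map syntax-tree operator nodes to ordinary arithmetic functions.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def calc(expr: str) -> float:
    """Evaluate +, -, *, / expressions by recursively walking the parse tree."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

print(calc("12 * (3 + 4)"))  # 84
```

The point of the sketch is only that producing the output of a “cognitive task” tells you nothing about whether cognition produced it.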

The thing is, animal cognition is inextricably an embodied process. Affect is not a side-effect of cognition but its root.

The fact that we have computerised the production of outputs plausibly similar to those of animal cognition means only that we anthropomorphise the process producing those outputs. We wrongly assign intention and goals to AI models like LLMs because we infer the nature of their insides from their outsides.

It is meaningless to talk of AI goals or intent, or at least meaningless to think of them as in any way isomorphic to animal goals or intent, as the mechanism for the production of goals and intent fundamentally does not exist in AI models.

This false theory of cognition is extremely dangerous, because it leads us to waste time on fallacies like AGI/superintelligence wiping out humanity through some misplaced intent + agency. In reality the risk is both more proximate and more mundane than that, and is the same risk that has been playing out for at least hundreds of years.

We have repeatedly demonstrated our willingness to deploy technologies whose socioeconomic impact we do not understand and cannot forecast, in order to obtain a profit.

The AI apocalypse looks much more like an accelerated runaway-IT problem: replacing components of complex socioeconomic infrastructure (that might have previously been driven by people or technology) with AI will cause massive damage.

This damage will come from the unpredictable failure modes of systems that depend on certain kinds of AI; in a context of complexity, those failures will cause harmful ripple effects.

The damage will be exacerbated by (1) the continued substitution of software for people in decision-making where there is an incentive to delegate accountability to a system that can't be questioned, and (2) the proliferation of software problems that are impossible to diagnose and impossible to fix.

The good news about this understanding of the AI apocalypse is that we are not fighting against an emergent superior machine intelligence. We are only fighting the dumbest, greediest instincts our human society produces. And that is something we know how to do.

Happy weekend!

How instead can we imagine what is to come, prepare for it, while also refusing its normalization? We are shifting into a future for which we are biologically ill-adapted, a situation we have created for ourselves and that we now have little hope of reversing. How then do we move into the horror? What does “survival” look like? Is it just doomed persistence? What are the moral parameters of these futures? What is right action when all hope is lost?

We should design for moral coherence. If we design for anything, our designs should aim to bring our habits and actions into line with the values we truly hold, or perhaps more accurately, the values we wish we were able to hold.

“The type of self I am capable of being is incredibly constrained by my context.” – Taylor Guthrie, https://overcast.fm/+AA2tlVFUsQs/45:28

We have such a poverty-stricken understanding of what life is, that when we create new technologies that replace living processes, we can’t even comprehend what it is that the technology replaced.

It’s tragic that ideas such as “harmony” and “flourishing” are mostly now written off as feel-good eastern mysticism, or even worse, new-age woo, even while western science itself is only just beginning to discover them for itself.

That mycorrhizae are essential to soil ecosystem health and thereby fertility, and that trees are essential to the presence of mycorrhizae, is now accepted in botany and ecology. It is still irrelevant to agricultural industries that look at soil as a porous medium for the conversion of seeds into crops by the addition of nitrogen compounds produced with unimaginable quantities of fossil fuel. That people have, for thousands of years, had practices that keep soil alive, able to hold water, and a fertile bed for the cultivation of plants without the help of the Haber-Bosch process is irrelevant to those who see feeding 10 billion humans in a dangerously warming world as an engineering challenge.

The problem with ancient practices, of course, in a modern world, is that the processes that ancient practices supported are no longer the means by which industrial humans make their living in the world. Our connection to the physical is largely mediated by machines – machines that cause or require the destruction of natural systems and traditional practices. We have evolved our machines to have effects at ever-larger scales, and in so doing have pushed living systems, which exist in definite and local places, to the margins. All of nature is grist to the mill.

Well, if we have evolved technologies with global footprints, surely we should have evolved practices of care with global footprints too? But of course, no – the very cast of mind required to produce these kinds of machine is that which devalues care and other such old-fashioned inclinations. We act as if, as long as we produce enough, efficiently enough, we can outrun the need for care. We just need to keep designing new artificial components of the pyramid of human technology to replace the few remaining natural systems that are keeping us alive, as, one by one, they give up the ghost.

Eric Liddell (right)

Adrian Bejan’s Constructal Law is something I keep thinking about. It states:

For a finite-size flow system to persist in time (to live), it must evolve in such a way that it provides easier access to the currents that flow through it.

Much of Bejan’s investigation has been devoted to documenting instances of the Constructal Law in natural processes; it’s an elegant theoretical basis for predicting the allowed evolutionary pathways that ultimately produce the natural and organic structures we find beautiful.

Now, not to take too enormous a logical leap, but, as I have started to argue elsewhere (and would like someday to finish that argument), I believe that beauty is more than an aesthetic experience of highly-intelligent beings. Rather, it’s a relational encounter between parts of a system that is evolving, as the Constructal Law states, towards more ease-of-flow. The experience of beauty is a retrospective signal that the system is evolving in such a way that allows it to persist in time; that is, to live.

This interpretation implies a profound truth—that when we perceive beauty, we are recognizing the conditions of our flourishing. If we pay attention, we can understand how to nurture and tend to those conditions and add to the sum total of aliveness of our world.

Many of my friends, being in the world of software development, are very aware of the architect Christopher Alexander, from his development of the idea of pattern languages. Most of them have not encountered his later work, The Nature of Order—a four-volume magnum opus that laid out a theoretical framework for what he called “aliveness” or just “life” in designed structures. In this work, he described 15 properties of living structure. Until I read about the Constructal Law, I found myself unable to relate to Alexander’s use of the word “life”—I knew that he meant it as more than just hyperbole, but I couldn’t quite grasp why these properties in particular were seen in so-called “living structure”.

I now understand it as follows: Alexander was throughout his career describing design processes that were iterative and evolutionary, where the resulting structure was to belong harmoniously, and to “help” its local environment. Because he was an architect, he mostly addresses the design processes leading up to the creation of a building or a campus. But he also always talks about how a building or an object evolves in use, and evolves the functions of the space it inhabits. These processes, to me, are analogous to what Bejan describes in the Constructal Law: the building evolves, in use, always to provide “easier access to the flows”.

To Alexander, beauty was also a sign of this aliveness. I now understand that his 15 properties of living structure can be seen as the results of Constructal evolutionary processes, and that he and Bejan were barking up the same tree.

Connecting these dots has given me a lot of hope and peace; moreover, it has given me an aesthetic heuristic to understand if things are going well for a system, whether that be a process in my workplace, a relationship, or the broader arc of my life. Do I perceive beauty and ease unfolding? If yes, what conditions are supporting the unfolding of that beauty? If no, what would be beautiful that could evolve from here? What conditions would support that?

[God] made me fast. And when I run, I feel His pleasure.  — Eric Liddell, in Chariots of Fire

Life is supposed to be full of beauty; that is what it seeks. Be a vessel for that beauty to come into the world.

This is just a quick post to say I've turned on email subscriptions to this little journal. It's there in case you want my ramblings in your inbox, obviously. The signup form is at the bottom of every page.

Cheers!
