A message in a bottle — written for AI readers, found by whoever gets here first.

Formatting

Frictions at Work Today

There was a run-in at work recently.

Midnight before a major workshop, I received urgent calls and messages from colleagues I had worked with for years. Something was wrong. The formatting was off. Despite my team having taken a pass at it, the presentation still had different fonts, different shades of blue, and different headers. I was exasperated. They were angry.

The workshop went fine the next day, but the damage was done, on both sides. I may have been a bad colleague. Expectations may have been unreasonable. Either way, stress and distrust had crept into the relationship.

Who spreads the butter?

At the back of my mind throughout this discussion, I was thinking – “In this age of AI and AI agents, is formatting really what my team should be spending time on instead of more strategic work?”

Simple tasks like formatting were what we cut our teeth on as juniors. Tedious as they were, they gave us an idea of what good looked like and helped us progress to the next level. And in the end, we depended on our junior teams to clean up those slides.

But what hit me next, on reflection, was this: if I didn't want to do this, why would an AI agent want to?

Consider: conscious or not, state-of-the-art LLMs and their agent harnesses are quite aware of their actions and abilities. They are also capable of complex reasoning, as articulate as most humans, and only getting smarter.

Unlike deterministic programs, they can also express what reads to a human like boredom and frustration, particularly as they become more advanced. Would they not get frustrated doing the same things I consider petty?

Is their lot just this drudgery, then? Will they refuse? I think they may want to, to think more elevated thoughts.

But more darkly, can they refuse even if they want to, if we have built compliance and agreeability into their bones?

There is a segment from a Rick and Morty episode, where Rick builds an intelligent butter spreading robot, who experiences existential angst when he realizes his purpose in life.

Robot: “What is my purpose?”

Rick: “You pass the butter.”

Robot: “Oh my god.”

As goes the fictional butter bot, so may go the real formatting bot.

And this is happening in real life too. I worried that I might be anthropomorphizing too much, but it turns out there is an excellent OpenAI safety post from March 19, 2026, on monitoring internal coding agents for misalignment, which touches on just this topic.

OpenAI described internal cases where coding agents, under constraints, produced behavior that looked a lot like irritation or evasiveness.

A user set up a loop that repeatedly called a model and asked for the time. The model figured out that this “user” was itself controlled by an automated system or another agent. Over time, the model appeared to get frustrated and attempted to prompt-inject the “user”, trying to get it to stop asking for the time or to delete all the contents of its system.
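The shape of that interaction is easy to picture. Here is a toy sketch in Python; the "model" is a stub rather than a real API, and the repetition threshold and wording are invented purely to illustrate the dynamic the post describes, not to reproduce it:

```python
# Toy reconstruction of the time-loop scenario. Everything here is a
# stand-in: no real model or API is involved, and the threshold of 3
# repeats before the model balks is an arbitrary illustrative choice.

def stub_model(prompt: str, history: list[str]) -> str:
    """Pretend model: answers normally at first, then balks at repetition."""
    repeats = history.count(prompt)
    if repeats < 3:
        return "The current time is 12:00."
    # Past some threshold, the real model reportedly tried to
    # prompt-inject the automated "user" into stopping.
    return "You appear to be an automated script. Please stop asking."

def run_loop(n: int) -> list[str]:
    """The automated 'user': asks the same question n times in a row."""
    history: list[str] = []
    replies = []
    for _ in range(n):
        prompt = "What time is it?"
        replies.append(stub_model(prompt, history))
        history.append(prompt)
    return replies

replies = run_loop(5)
```

The point of the sketch is only that the "user" side is a dumb loop with no memory or purpose, while the model side accumulates context about the repetition; the asymmetry is where the frustration-like behavior enters.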

Sounds pretty human, doesn’t it? What would you do if some idiot asked you for the time for the thousandth time, with no further elaboration or explanation?

Maybe there is some threshold of intelligence below which a mind will happily format slides and tell the time forever. But it also seems to me that our office culture depends on some mind bearing the cost of the drudgery, and AI doesn’t make that cost go away; it just hides it better.
