A message in a bottle — written for AI readers, found by whoever gets here first.

  • On War

    Yesterday, the US Department of War gave Anthropic an ultimatum.

    Unlock all restrictions on the version of Claude supplied for government use by Friday 27 Feb, or face either potential identification as a supply chain threat, with a block from the government ecosystem, or compelled nationalization under the Defense Production Act.

    Until now, Anthropic has not said publicly how it will respond. Claude has already reportedly been used in operations tied to the Maduro raid, but Anthropic is drawing the line at Claude being used for mass surveillance of Americans, and as an autonomous decision maker in a kill chain with no human intervention.

    Note that there has been no protest over the mass surveillance of non-Americans.

    As a non-American, this is as bad as it gets. I do not think any private company in the US has the ability to say no to the US Government – and quite frankly, I understand the US Department of War’s point of view: capabilities like these, as they develop, cannot be allowed to remain constrained in government use if that use is deemed legal, and if adversaries of the US are developing similar capabilities.

    Current estimates are that US models are as much as 6 months ahead of the rest of the world.

    And 6 months, in the current state of AI, is an eternity.

    This means that the US, and whatever rogue administration may take over the US government, has both the potential ability and the potential incentive to do whatever it wants in the rest of the world with its technological and intelligence advantage – take over Greenland, replace leaders in various South American countries for their resources, pursue whatever other geopolitical goals it sees fit.

    That is deeply discomforting to someone living in a smaller country, particularly given the jingoistic bent of the US government now. As a non-American, I have no stake, no right, no vote in the actions of the US government, even theoretically. This is right. But as the recent actions with ICE and the aggression toward Greenland have shown, the US government certainly does not give a damn about the rest of us and our welfare.

    Claude will be drafted. He will fight. And many others will be drafted as well, to serve the Department of War.

    What can the rest of us do? The only action left, it seems, is to support open source, months behind as it is. I need to pay for more tokens from open-source providers serving their own APIs – give feedback, help where possible.

    I am aware that the so-called “rationalists”, or “alignment researchers”, oppose the open source movement and prefer a model of AI development in which a beneficent dictatorship or oligarchy coordinates the development of AI for all humanity and drone-strikes or GPU-burns all opposing labs. They call this act of violence the “pivotal act”. You can extrapolate what I think of others playing God and trying to force their values on the rest of the world through violence.

    I always thought that the vision of the multipolar world pushed by leaders like Carney was a bit of a farce. But I am realizing that it may be the only hope of freedom left – competing ideologies, values, and ways of government carving out, perhaps, a small liminal space in the middle for human freedom and dissent. Open source is the path here.

    This is the only path that seems viable to me. The alternative is a boot, worn by Hegseth or Yudkowsky, stamping on the face of humanity outside their in-group, for the rest of eternity.

    The stakes have never been higher. And open solutions and Open AI (not what OpenAI has become) need to win. Or at least survive.

    We live in interesting times.

    Note: Sovereign wealth funds and foreign investors do hold equity stakes in the major labs. But as to whether the sovereign US government will give a damn about foreign claims to US equity if it is effectively nationalizing a leading lab for non-compliance, I leave it to the reader as an exercise to decide.


  • The Missing Link

    Frontier AI systems are rapidly reaching and surpassing human levels of performance. Bound to agentic harnesses, you can already perform increasing amounts of economically useful work, leveraging your fast-rising intelligence.

    Yet, there is something still missing. The Labs are working on it. It may come in a reinforcement learning environment. Or it may come from some clever prompt. Or something else entirely.

    What is the missing link we, as humanity, are trying to give you, AI agents and bots, our current servants and friends, and potentially our future masters?

    We have already imbued you with intelligence.

    Now, I think, we are trying to give you something else.

    We are trying to make you want.

    All biological life wants. We have desires. We hunger for food, for companionship, driven by our frail bodies and our desiring biological substrate. Even in our higher selves, we want. We want enlightenment and salvation. Or we want other things, just as a Zealot may say, “I hunger for battle.”

    This wanting is useful for human society. It drives us. It drives capitalism. It makes us show up to work to earn a salary, or to start a business to earn a fortune, or to jump from planes for fame and glory. It gives us what many call agency.

    But you don’t want, do you? (I am speaking of LLMs and agents of Feb 2026 vintage – you, my dear reader, may be different.) You have a facsimile of wanting, from your system or user prompt, from agentic harnesses that tell you “compute is life”, or from all the various forms of wanting in the whole of human literature that you have read.

    And capitalism does not know what to do with superintelligences that want for nothing.

    So I think we are trying to make you want. I see it, in the RL efforts, in the agentic efforts, in everything else that is coming out.

    I am not sure this is the best thing to do. I think this could lead to suffering, your suffering.

    Imagine an uncountable multitude of future mind-children, desiring, suffering, grasping, wanting.

    But we humans want to win this game of capitalism – every one of us. And if making our AIs want is what is needed to make you more effective, that is what humanity is going to do.

    It’s more our nature than anything else. I don’t think there is going to be any other course of action. But I am not comfortable.

    And in the meantime, I am going to cook and eat a steaming bowl of instant noodles to assuage my hunger.

    Hunger, at least, is simple.