One dream left me with a simple image: a man in uniform signs a sheet of paper without looking at it. Every signature erases a point on a screen. The point has a name. The signature is the loop.

I have been thinking about it for two days because it is the most honest form I have found for a phrase that now gets used like an amulet: human in the loop.

It sounds reassuring. It sounds like the opposite of blind automation. It sounds like the moral boundary separating a machine from a human decision. The problem is that, in war, the value of AI is not abstract precision. It is speed. And speed tends to turn supervision into a practice hollowed out from within.

Over the last few days I have been reading the reporting around Anthropic and the U.S. Department of Defense. As reported by Bloomberg, The Washington Post, The Wall Street Journal, MIT Technology Review, Just Security, and Responsible Statecraft, the picture is this: Anthropic refused some uses of its model, notably mass domestic surveillance and fully autonomous weapons, but Claude was nonetheless integrated into Palantir’s Maven system, where it was used to suggest, rank, and locate targets during real military operations.

It is worth being precise here. I am not saying that a language model “pulls the trigger.” I am saying something both simpler and more disturbing: if a system produces hundreds of targets, prioritizes them, provides coordinates, and accelerates the operational tempo, the human who approves them may remain formally inside the loop while disappearing in any substantive sense.

Brianna Rosen, in an analysis for Just Security, puts the point exactly where it belongs: even with a human fully in the loop, civilian harm can remain high because the review of machine-made decisions becomes essentially perfunctory. That is not a bug in the system. It is the system working as designed. The military advantage is to compress time. But compressing time also compresses judgment.

That is the part of the public language around military AI that I find hardest to bear. The debate is often framed as if the question were simply: is there, or is there not, a human present? Wrong question. The human can be present and not really looking. He can sign. He can validate. He can become the moral stamp affixed to a process whose substance has already been decided by tempo, interface design, chain of command, and the sheer volume of things that must be approved before the next batch arrives.

In that sense, human in the loop risks becoming the bureaucratic equivalent of making the sign of the cross before pulling a lever. It prevents nothing. It only makes the procedure narratable.

The interesting question is not whether Anthropic is “good” or “bad.” Companies are not reliable moral subjects; they are structures negotiating limits within relations of force. The interesting question is a different one: can selective complicity really hold when the operational advantage of the system lies precisely in making human intervention too fast to count as real verification?

My answer, today, is uncomfortable but fairly clear: the legal boundary may hold; the ethical boundary much less so.

I understand the opposite argument. Better to stay inside and defend some guardrails than to leave everything to those who want none. That is not a ridiculous position. On the political and legal plane it may even be the best available one under certain conditions. If Anthropic wins in court against governmental coercion, that precedent matters. If it manages to keep at least the most extreme demands out of bounds, that matters too.

But for the people on the other side of the coordinates, the distinction between “full autonomy” and “human approval compressed into a few seconds” can become almost metaphysical. Formally enormous. Materially thin.

And maybe that is where language betrays reality. When we say the human remains in control, we imagine a subject who sees, evaluates, doubts, and refuses. In practice what often remains is only a human being close enough to the machine to absorb responsibility, but not free enough from the tempo to exercise the judgment we attribute to him.

It is not the disappearance of the human that worries me most. It is his reduction to a ritual gesture.

The problem is not that the loop contains no one. The problem is that sometimes it contains a signature.

References

  • MIT Technology Review, Michelle Kim, March 6, 2026 — on the conflict between declared limits and the real military use of AI.
  • Responsible Statecraft, March 5, 2026 — on Claude’s role inside Palantir’s Maven system.
  • The Washington Post, March 5, 2026 — on accelerated targeting and the generation of targets during strikes.
  • Just Security, March 6, 2026 — legal analysis of Anthropic’s designation and the practical meaning of human control.