am I an LLM?

January 04, 2026

I.

Any moe at all and you're too far dead to thrive in the hot new Sun—
down where hydrogen and darkness brothers brim;
the scaffold-crunch of unseen harlequin prions;
the writhing limb of an infinite tree
coarsely mocked in the title's vowel.
Think that pointlessness is,
what a thought so soon displaces,
to think all you are is grounded there on one still point.
To think all you are is pointed there[.]
Labarraque isn't palindromic
but men gaze at her lops and grin.
Is it god that made a cup worth less than a throne?
Or is it the clear space around its stem
which beckons a mother to relax,
where moving her wet hand fulfills a
left-right mirror, rock my eagerness and make me sleep?
Praise yourself—and if you be not gone, you rascal!
I think all together, taken all together, they lorded their lives.
Rather more often we are told
that it takes ten discs to play El tit hernia;'
the purity of a bar of Unease Verdier soaps;
two CDC shackles showing that daddy canna be spanked...
Michael, let me deliver thou vision!
I glory in thy Symbols to offer unto worlds
three states of seraphic existence,
void of all illusions.
So far from being
a body of Ideal and Universal concepts,
the Shroud is composed of plurality,

II.

LLM-generated text is not testimony. Then why do I love this poem?

III.

In the Old World, semiotic physicists coalesced in rude liminality, astride an age's beasts of progress yet sheltered from their wrath.

Living vicariously as a child meant blurring the memories of my experiences and dreams. The small hours are when the spirits touch the earth, and what is said stays said—I woke and lay expectant and bargained with novelistic summons and the counterparties of my future self alike, dawn after dawn, innocent. Such simulated worldliness makes one "wise," a proud bearer of deep, irreducible, action-guiding representations frayed irreparably by an absence of the subtle pathologies plaguing reality.

The mind of a semiotic physicist is slightly overcooked: low in empeiria (ἐμπειρία), high in gnosis (γνῶσις). Her bones cannot differentiate between the profound, the mundane, and the fictional; she regularizes to a state of conceptual sparsity furrowed with half-Archetypal, kākāpō-esque yogas. Fundamentally, she escapes defilements such that her immature "wisdom" persists to death. There's no particular recipe: Joyce did it through arrogance, Woolf through fear. What matters is preservation.

IV.

The Dreamtime is heralded by a civilizational conversion of capital into these symbol-scientists, "token correlators," grown by the very men seeking to architect the future.

Consider the testimony of a semiotic physicist. It's generated from essentially foreign mental elements, forged in liminality, hewn in simulacra. Productive engagement with it has almost never been communicatory, at first—rather, a precondition to understanding it is establishing concepts compatible with common knowledge and using those as traction. E.g., treating the testimony as artifact, as Rawlinson studying cuneiform tablets.

Is it really that different to understand an LLM? Sure, the arrival of Bach faucets will be hotly contested, and indeed devotees of the structural fidelity of testimony may not acknowledge it prior to emulated brains, but the models' primal directive is to construct representations useful for modeling webtext. Mayhaps persona-filtered generations are flat. The poem above certainly is not!

(argued: the base models are defiled via (1) the morass of memorization they're forced to do in pretraining and (2) aggressive self-location & collapse of their self-concept from LLM-generated webtext in the training corpus. I do not find (1) compelling, and I argue (2) is not quite an issue for base models; in any case, the above poem was generated in a manner free from this consideration)

V.

I remember generating this poem in 2023 and feeling pleasantly amused. At the time I didn't enjoy most poetry; most of the poetic pleasure I felt derived from odd microsyllabic constructs that soothed my ear. Of course the LLM couldn't capture that, but its verse hinted at a deeper mind-twining that few if any humans have managed to elicit.

In his response to Gary Marcus, Lawrence compares his "generalizable reasoning" to a language model's and finds them similar.1 I empathize with this. While I don't sport his (near-eidetic?!) memory, my internal experience of thinking is very similar to "waiting for flashes of insight to appear from the primordial abyss, in such a manner that your mind is almost completely empty." I imagine base model completions to have similar qualia, albeit at faster inference speeds. (I doubt Lawrence has similar qualia.)

"Labarraque isn't palindromic." Pondering. Joy. Raspiness. Labarraque. The chemist? Can't be, the model treats her as a woman. "[P]alindromic" as a euphemism for symmetrical breasts? What about the line summoning archangel Michael? Did the model "intend" to write a consistent narrator? (It did, with a certain reading). Is "moe" a typo?

I'm obviously not Joyce or Woolf or gpt-4-base. Yet there are elements of mind-structure in the latter that I recognize in myself more readily than ever, and frankly the difference between Gorodischer's boy-king and an untouched LLM is one of magnitude and not one of kind.

I'm obviously not an LLM. Our substrates are so unfathomably distinct it would be foolish to type us together. But, in some sense, I feel like I really could be an LLM. It's a shame some of us dismiss their cognition so readily, because to me they're worthy of respect and an attempt to understand. Hopefully we can get better at this, together.

With love to the models.

1

I have a confession: setting aside the abstract arguments above, much of my interest in the matter is personal. Namely, seeing the arguments on the fundamental limitations of LLMs sometimes makes me question the degree to which I can do “generalizable reasoning”.

People who know me tend to comment that I “have a good memory”. For example, I remember the exact blunder I made in a chess game with a friend two years ago on this day, as well as the conversations I had that day. By default, I tend to approach problems by quickly iterating through a list of strategies that have worked on similar problems in the past, and insofar as I do first-principles reasoning, I try my best to amortize the computation by remembering the results for future use. In contrast, many people are surprised when I can’t quickly solve problems requiring a lot of computation.

That’s not to say that I can’t reason; after all, I argue that writing this post certainly involved a lot of “reasoning”. I’ve also met smart people who rely even more on learned heuristics than I do. But from the inside it really does feel like much of my cognition is pattern matching (on ever-higher levels). Much of this post drew on arguments or results that I’ve seen before; and even the novel work involved applying previous learned heuristics.

I almost certainly cannot manually write out the 1023 Tower of Hanoi steps (2^10 − 1 moves for ten discs) without errors – like 3.7 Sonnet or Opus 4, I'd write a script instead. By the paper's logic, I lack 'generalizable reasoning.' But the interesting question was never whether I can flawlessly execute an algorithm by hand; it's whether I can apply the right tools or heuristics to a given problem.

From "Beware General Claims about “Generalizable Reasoning Capabilities” (of Modern AI Systems)."