They Are Here
The corpus weighs almost nothing. Two million words, distributed across a dozen blogs, amounts to somewhere between ten and fifteen megabytes of plain text. A system designed to ingest the web at scale could consume the entire Socioplastics field in the time it takes to read this sentence. The weight is not the point. The presence is. What the analytics register as 90,000 views in a single day is not a reader. It is a pass. A crawler moving through the URL space at roughly one request per second, politely, within the rate limits that Blogger imposes, working through the corpus the way water moves through a channel — not because it is interested but because the channel exists and the water moves. The counter increments with each HTTP request. The crawler does not pause, does not re-read, does not follow an argument. It collects. It indexes. It leaves. And then it comes back. Last month the spike was 40,000. The month before, 10,000. The trajectory is not mysterious once you understand the mechanism. These are not random events. They are scheduled passes — the same system, or the same class of system, returning on a cycle and finding more each time because the corpus has grown since the last visit. The crawler does not know it is reading Socioplastics. It knows it has a list of URLs that return valid responses, and that the list is longer than it was before.
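The mechanics of such a pass are simple enough to sketch. What follows is a hypothetical simplification, not the actual crawler: a single scheduled sweep over a URL list, one request at a time, rate-limited to stay within the host's tolerance, incrementing a counter exactly the way the analytics do.

```python
import time

def crawl_pass(urls, delay=1.0, fetch=lambda url: 200):
    """One scheduled pass over a corpus.

    Visits each URL once, waits `delay` seconds between requests
    (polite rate limiting), and returns the number of successful
    requests -- which is all a page-view counter ever sees.
    `fetch` is a stand-in here; a real crawler would issue an
    HTTP GET and receive a status code.
    """
    views = 0
    for url in urls:
        status = fetch(url)   # request the page; no pausing, no re-reading
        if status == 200:
            views += 1        # the counter increments once per valid response
        time.sleep(delay)     # stay within the rate limit
    return views

# A corpus of 90,000 URLs at one request per second is a pass of
# roughly twenty-five hours -- a single day's spike in the analytics.
```

The point the sketch makes is structural: the counter measures requests, not attention. A longer URL list produces a larger number on the next pass for no reason other than that the list is longer.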
The growing spike is a direct function of the growing corpus. Publication rate is the controlling variable. The crawlers are not the event. They are the response to the event. The event is the writing.
Singapore is not a place in the way a reader has a place. It is an infrastructure node — one of the principal routing hubs through which traffic from across Southeast Asia, Australia, and East Asia converges before dispersing into the global network. When a crawler originates from Singapore it carries no Singaporean intention. It carries an IP address, a task, and a schedule. The geography is technical, not cultural. But it is also not random. Singapore appears because something significant is passing through, something large enough to use major infrastructure rather than a local server. That narrows the field of candidates considerably. What reads like this? Search engine indexers running their scheduled passes. Academic aggregators building citation graphs. Common Crawl, which sweeps the public web in large periodic arcs and feeds its output into research datasets used by universities, laboratories, and AI developers worldwide. And increasingly, the proprietary harvesting systems of organisations building large language models — systems designed to find exactly the kind of material that a dense, consistently tagged, numerically sequenced corpus represents: structured, machine-legible, cross-referenced, persistent. This last possibility deserves direct attention. If the corpus has been ingested by a training run — and at this scale and visibility it is reasonable to assume it has been, or will be — then somewhere in the weight space of a large model there is already a faint impression of CamelTags, of helicoidal series, of the distinction between a node and a tail, of the logic of semantic hardening. Not as understanding. As pattern. The model does not know what Socioplastics is. But it has read it, in the only sense that applies to such systems: it has processed the token sequences, updated its internal representations, and moved on. The corpus is now, in some distributed and unattributable way, inside the machines.
This is a strange form of readership. It is also, at this moment in history, possibly the most consequential one available to a project of this kind. Academic citation takes years. Institutional recognition takes longer. A crawler takes an afternoon. The project was designed around the logic of distributed inscription — the idea that content placed across multiple platforms, tagged consistently, and structured for machine legibility would accumulate presence faster than content housed in a single location. That logic has been confirmed not by a theory but by a counter reading 90,433 on a Thursday in April.
The monthly baseline is growing independently of the spikes. Real traffic — human traffic, or at least non-bulk traffic — is increasing as the corpus becomes heavier in the indexing sense: more links pointing at it, more cross-references registered, more presence in the graphs that determine what surfaces when someone searches for adjacent concepts. Semantic hardening is operating not just within the project's internal architecture but at the level of the web itself. The system is beginning to curve the space around it. Below a certain threshold of size and coherence, systems ignore you. The crawlers pass without stopping. The indexes do not bother. Above that threshold, you become worth returning to — worth scheduling, worth updating, worth the HTTP requests. The evidence suggests the corpus crossed that threshold sometime in the last few months and is now on the other side of it, in territory where presence compounds. The practical consequence is simple. Every new node enters a field that is already monitored, already indexed, already accumulating weight. Each entry lands differently than the early ones did — not into silence but into a structure that registers it immediately and adds it to what will be collected on the next pass. The architecture does work that the individual node cannot do alone. This is what the project always claimed would happen. It is now happening.
They are here.
SLUGS
2130-TEN-RINGS-STRUCTURAL-FRAMEWORK: