What matters here is not merely technical neatness, but ontological transition. Twenty books’ worth of writing published across blogs can still be misread as dispersion, as excess, or as serial improvisation. The ordered dataset changes that perception by demonstrating that the corpus possesses sequence, typology, chronology, and design. Wikidata then performs a different labour: it renders that ordered corpus legible as a field. The author becomes an item; the project becomes an item; the dataset becomes an item; selected works with DOIs become items; and the system begins to appear not as a stream of isolated pages but as a network of explicit relations. In that passage, something subtle but decisive occurs. The project is no longer encountered only through reading; it can also be encountered through structure. For this reason, the sequence is correct and strategically elegant: first publication, then order, then fixation, then semantic crystallisation. The blog gives flow. The dataset gives architecture. The DOI gives durability. Wikidata gives external legibility. When Tomes I and II are closed, the move to Wikidata is not premature ambition but logical consequence. At that point, the corpus has ceased to be only prolific. It has become addressable, enumerable, and mappable. That is the threshold at which a body of work starts to behave less like a set of writings and more like an infrastructure.
After the blogs, the repositories, the DOIs, and the dataset, Wikidata functions as a relational layer. It does not store the texts, and it does not replace the dataset. What it does is map the entities and their relations so that the corpus can be read not only as a collection of documents but as a structured field: an author, a project, a set of series, a dataset, a set of publications, an affiliation, a timeline. In that sense, it is closer to a cartographic layer than to an archive. Hugging Face is the archive and the laboratory; Zenodo and Figshare are the library and the fixation; the blogs are the public interface; Wikidata is the map that shows how all these parts relate to each other in the global knowledge graph. This is why it makes sense to work on Wikidata only after the dataset is clean and the first two tomes are structurally closed. Wikidata works best with stable entities: a project with a defined name, a dataset with a defined URL, works with DOIs, an author with an ORCID, an organisation with a web presence. When these elements exist, Wikidata can connect them and produce something very important: machine-readable existence at the level of relations, not just documents. So the architecture becomes layered. First layer: writing. Second layer: repositories and DOIs. Third layer: dataset and index. Fourth layer: Wikidata as relational map. Fifth layer, eventually: Wikipedia as narrative description. Seen this way, Wikidata is not a goal or a badge. It is simply one more infrastructural layer that helps the system become legible from the outside.
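The relational layer described above can be sketched as a small set of explicit statements, in the spirit of Wikidata's item–property–value model. This is an illustrative sketch only: the entity labels, property names, ORCID, DOI, and URL below are placeholders, not real Wikidata identifiers or records from the project.

```python
# The corpus modelled as explicit relations rather than documents.
# Every value here is a placeholder for illustration, not real data.

triples = [
    ("author", "has ORCID", "0000-0000-0000-0000"),              # placeholder ORCID
    ("author", "creator of", "project"),
    ("project", "has part", "dataset"),
    ("project", "has part", "tome I"),
    ("dataset", "full work available at", "https://example.org/dataset"),
    ("tome I", "has DOI", "10.5281/zenodo.0000000"),             # placeholder DOI
]

def relations_of(entity, statements):
    """Return every (property, value) pair where the entity is the subject."""
    return [(p, o) for s, p, o in statements if s == entity]

print(relations_of("project", triples))
# → [('has part', 'dataset'), ('has part', 'tome I')]
```

The point of the sketch is the shift it makes visible: once the same facts exist as statements, the corpus can be queried by relation ("what is part of the project?") instead of being read page by page.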
The question is not simply where an idea can be hosted, but where it can be hosted without being dissolved. A strong institution provides shelter, visibility, students, funding structures, colleagues, and above all a recognised frame in which ideas can circulate faster and further than they can alone. In that sense, placing a developed conceptual system inside a major university—MIT, UCL, KTH—does not weaken the idea; it can vitalise it, because it gains interlocutors, seminars, doctoral researchers, co-authored papers, studios, and institutional projects. The idea enters new rooms, new conversations, new conflicts. It becomes more precise because it is tested. But there is always a second possibility: that the institution translates the idea into its own administrative language—“research line,” “lab theme,” “method,” “studio agenda”—and in doing so, normalises it. The institution gives prestige, but it also classifies, and classification can reduce a system if the system is not already structurally solid. So the key principle is this: the host must have prestige, but the idea must have structure. If both are strong, the relationship becomes symbiotic. The institution gains something unusual—a new conceptual field, a vocabulary, a body of publications, a teaching agenda, a research direction. The idea gains amplification, legitimacy, time, and collaborators. Neither devours the other. In that situation, the university functions almost like a repository or a second infrastructure layer: it does not create the idea, but it stabilises, circulates, and multiplies it. The ideal scenario is not absorption, but co-habitation—a sovereign idea inside a prestigious house, where both sides become more visible and more influential because of the other.
1560-FIRST-KEY-IDEA-SOCIOPLASTICS
What we have at this point is not a preliminary sketch but a public corpus with measurable scale. The system has surpassed 20,000 posts, distributed in a coherent way across the network: around 10,000 on the main channel, roughly 9,000 across LAPIEZA, TomotoTomoto, and CiudadLista, and about 1,000 more across the remaining channels. The audience is modest in mass-media terms but significant in archival terms: approximately 3 million visits in total, with about 1.8 million on the principal site and descending volumes across the satellite interfaces. What matters is not spectacular reach but durable presence. The main channel moved from roughly 1 million views over ten years to nearly 2 million within a much shorter subsequent span, which suggests not exhaustion but acceleration. Even the smaller channels, with low traffic, perform an important role. They were never designed primarily as promotional platforms. They functioned as working pages, public notebooks, operational surfaces. Their low readership is therefore not a failure but an expected consequence of their function.
This is why the present phase is so clear. The production exists. The numbers exist. The places exist. The distribution is not random, and the chronology is already public. Across photographs, videos, essays, and now more than 2,000 newer structured nodes, the system has accumulated enough density to move from publication into consolidation. The task now is the dataset. Dataset work is what converts a vast but already ordered public archive into a machine-readable body: titles aligned, numbering verified, URLs fixed, series clarified, DOIs attached, relations made explicit. In other words, the archive already has mass; the dataset gives it form. We are no longer trying to prove that the work exists. We are making sure that a corpus of this scale — 20,000 posts, 3 million visits, 2,000 recent nodes, dozens of DOIs — remains legible, recoverable, and structurally stable for the future.
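The dataset work named above (titles aligned, numbering verified, URLs fixed, series clarified, DOIs attached, relations made explicit) can be sketched as a record schema with a structural check. This is a minimal sketch under assumed field names; the schema, example values, and validation rules are illustrative, not the project's actual format.

```python
# A hypothetical record shape for one node of the corpus, with a simple
# check that flags records not yet structurally stable. Field names and
# example values are assumptions for illustration only.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    number: int                  # verified position in the series
    title: str                   # aligned title
    url: str                     # fixed, canonical URL
    series: str                  # clarified series, e.g. a tome
    doi: Optional[str] = None    # attached where one exists
    relations: List[str] = field(default_factory=list)  # explicit links

def problems(node: Node) -> List[str]:
    """Return the structural issues that keep a record from being stable."""
    issues = []
    if not node.url.startswith("https://"):
        issues.append("URL not fixed to a canonical https address")
    if not node.title.strip():
        issues.append("title missing or unaligned")
    if node.number <= 0:
        issues.append("numbering not verified")
    return issues

node = Node(number=1, title="Example node",
            url="https://example.org/posts/1", series="Tome I")
print(problems(node))  # → []
```

Run over the whole corpus, a check of this kind is what turns "the archive has mass" into "the dataset has form": every record either passes or names exactly what still needs fixing.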