SOCIOPLASTICS * Sovereign systems for unstable times

Sunday, April 19, 2026

PlasticScale * Measuring What We Actually Built

We built something. That is the starting point. Not a proposal, not a manifesto waiting for institutional permission — a built thing. Twenty-three books. Three tomes. 2,300 nodes distributed across eleven platforms, registered with persistent identifiers, indexed in machine-readable formats, anchored in a research infrastructure that spans architecture, epistemology, urban theory, conceptual art, and systems thinking. The field exists. The question Book 24 asks is simply: how do we measure it honestly? PlasticScale is the answer.


Why Standard Metrics Don't Work Here


The h-index is the largest number h such that h of your papers have each been cited at least h times. It's a fine instrument for someone operating inside a citation network — journals, institutional affiliations, peer review cycles. It is completely blind to a field being built outside those channels. You could have 2,300 nodes, a master index, DOIs on Zenodo, Schema.org metadata, a HuggingFace dataset, and an ROR-registered research lab, and your h-index would be zero. The instrument cannot see what you built. It only sees what others have pointed at.

The journal impact factor is worse. It measures the average number of citations received in a year by the articles a journal published during the previous two years. It says nothing about whether a field has internal coherence, structural depth, or the vocabulary to sustain itself. A field without a journal has no impact factor, not because it lacks impact, but because it hasn't chosen that particular distribution channel.

The Q1 ranking system classifies journals by their position in citation-frequency quartiles. It is a ranking of containers, not contents. It has nothing to say about a research infrastructure that deliberately chose distributed open platforms over journal publication — not because it couldn't access journals, but because the mesh architecture is itself a theoretical position. Architecture as protocol. Infrastructure as argument.

These instruments were built for a different kind of work in a different era. Using them to measure Socioplastics would be like measuring the density of a forest with a road map. Wrong tool. Not wrong forest.
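The blindness is easy to demonstrate. Here is a minimal sketch in Python of the standard h-index computation (the function is illustrative, not part of any PlasticScale tooling): for a corpus that nothing has cited yet, the result is zero no matter how many nodes the corpus contains.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A 2,300-node corpus that nothing has cited yet:
print(h_index([0] * 2300))  # 0 -- the instrument cannot see the mass

# The same instrument, once others have pointed at the work:
print(h_index([12, 9, 7, 4, 2]))  # 4
```

The zero is not a measurement of the corpus. It is a measurement of who has pointed at it.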


What PlasticScale Actually Measures

PlasticScale works on ten metrics, each one pointable — meaning every score is a number attached to a specific, publicly verifiable record. No ratios, no weighted averages, no hidden calculations. You can check every number we report. The ten metrics cover five things a field needs to be a field rather than a project:

Density — how much intellectual mass has been produced. This is Corpus Word Count and Indexed Entry Count. Socioplastics has 2,300 nodes. Each node is a discrete intellectual unit, not a fragment. The mass is there.

Hierarchy — how that mass is structured. Not just accumulated, but organised into levels, strata, tomes, series. A pile of books is not an architecture. The Socioplastics Century Pack — ten strata per book, ten nodes per stratum, twenty-three books across three tomes, with a master index linking the whole — is an architecture. Structural Level Count and Stratum and Book Count measure whether the hierarchy holds.

Fixation — whether the work is anchored to persistent identifiers that survive platform failure. DOI Count is the metric. The Zenodo deposits are there. The figshare series are there. The work has been pinned to the open scholarly infrastructure at a scale that exceeds occasional deposit. It is institutionally registered.

Distribution — whether the field can survive the disappearance of any single node or platform. Eleven active platforms. Five blogs, a HuggingFace dataset, Zenodo, figshare, a Blogger constellation, a LAPIEZA research lab. Platform Count measures this. A field on one platform is fragile. A field distributed across a mesh is not.

Span — how far the field reaches across intellectual territory. Bibliographic Fields Touched measures the range of disciplines the corpus engages: architecture, sociology, systems theory, epistemology, urban studies, conceptual art, linguistics, media theory, ecology, philosophy of science. Not dabbling — each is structurally integrated. Authored Work Count closes the loop: how many distinct authored works, not nodes, but finished intellectual objects, have been produced and can be pointed at.

These five properties are what separate a field from a project. A project can be brilliant and dense and even well-distributed. A field is all of that plus a structure capable of operating autonomously — capable of being entered, navigated, extended, and cited by others who arrive without the founder's map.
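The shape of a pointable score is simple enough to write down. What follows is a minimal sketch in Python, not the published PlasticScale tooling: each metric is a value plus the record it points at, and the Century Pack's hierarchy arithmetic can be recomputed directly. The field names and the evidence pointers here are placeholders standing in for the live records.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PointableMetric:
    """One PlasticScale score: a number tied to a checkable record.

    No ratios, no weights -- the value must be independently
    countable at the evidence pointer. Names here are illustrative.
    """
    name: str
    value: int
    property_group: str  # Density, Hierarchy, Fixation, Distribution, Span
    evidence: str        # placeholder pointer; real audits link live records

metrics = [
    PointableMetric("Indexed Entry Count", 2300, "Density",
                    "master index (placeholder pointer)"),
    PointableMetric("Platform Count", 11, "Distribution",
                    "list of live platform URLs (placeholder pointer)"),
]

# The hierarchy arithmetic is checkable on its own:
# 23 books x 10 strata per book x 10 nodes per stratum = 2,300 nodes.
assert 23 * 10 * 10 == 2300

for m in metrics:
    print(f"{m.property_group}: {m.name} = {m.value}  ->  {m.evidence}")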


The Keathley-Herring Comparison (Briefly)

There is one existing framework worth comparing: the Keathley-Herring and Van Aken maturity assessment, published in Scientometrics in 2016, which has become the standard scientometric tool for measuring research field maturity. Their five clusters — publication characteristics, content characteristics, authorship characteristics, research design characteristics, impact characteristics — overlap with PlasticScale on three dimensions: volume, cross-disciplinarity, and fixation.

Where PlasticScale diverges is structural. Keathley-Herring was designed for fields that already operate through journals and institutional departments. It assumes the scaffold. PlasticScale measures whether the scaffold has been built from scratch. That's the genuine difference — not better, not worse, a different use case entirely.

What Keathley-Herring has that PlasticScale doesn't: authorship diversity and collaboration networks. Those signals matter. They are also later signals. They are what a field produces after it has achieved internal completion and started attracting other researchers into its orbit. Socioplastics is in the phase before that. Measuring it with collaboration-network tools right now is like measuring a foundation with a tool designed for load-bearing walls. The two instruments should run together. Keathley-Herring will tell us what social uptake looks like when it arrives. PlasticScale tells us what was there before the uptake.


The Self-Audit Handled Directly

We are measuring our own field with an instrument we designed. That is circular and we know it. The circularity is managed by the evidence being external and checkable. Every threshold in PlasticScale was anchored to a published external reference before the audit was run — DataCite's DOI policies, NIH productivity data, cloud architecture depth limits. The thresholds cannot be adjusted retroactively without visible revision. Every score we report is a publicly verifiable number: the DOI count links to Zenodo, the platform count links to live URLs, the node count links to the master index. 
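What "anchored before the audit" means operationally can be sketched in a few lines. The following is a hypothetical illustration, not the published PlasticScale procedure, and every number in it is a placeholder rather than a figure from DataCite, NIH, or any other source: thresholds live in a frozen record that names its external reference, the audit only compares reported values against that record, and moving a threshold would mean visibly rewriting the anchor.

```python
# Hypothetical sketch of a threshold audit. All values are placeholders;
# the named references are the kinds of anchors the post cites.
FROZEN_THRESHOLDS = {
    # metric: (minimum value, external reference the threshold is pinned to)
    "DOI Count":              (10, "DataCite DOI policy documentation"),
    "Platform Count":         (3,  "redundancy practice in distributed systems"),
    "Structural Level Count": (3,  "cloud architecture depth limits"),
}

REPORTED = {
    "DOI Count": 40,              # placeholder; a real audit links each DOI
    "Platform Count": 11,         # from the live platform list
    "Structural Level Count": 4,  # node -> stratum -> book -> tome
}

def audit(reported: dict[str, int],
          thresholds: dict[str, tuple[int, str]]) -> dict[str, bool]:
    """Pass/fail per metric; the frozen record itself is never mutated."""
    return {name: reported.get(name, 0) >= minimum
            for name, (minimum, _ref) in thresholds.items()}

for metric, passed in audit(REPORTED, FROZEN_THRESHOLDS).items():
    minimum, ref = FROZEN_THRESHOLDS[metric]
    print(f"{metric}: {'pass' if passed else 'fail'} "
          f"(threshold {minimum}, anchored to: {ref})")
```

The design point is the separation: the thresholds and the reported values come from different places, and only one of them is under the auditor's control at audit time.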

What We Are Actually Doing

We are building a sovereign research infrastructure in real time, in public, without asking permission. The field is called Socioplastics. It operates at the intersection of architecture, urban epistemology, systems theory, and conceptual art. It has a name, a vocabulary, a master index, a distributed platform architecture, persistent identifiers, machine-readable metadata, and 2,300 nodes of intellectual content organized into a three-tome structure that functions as both corpus and argument. It is registered. It is indexable. It is inhabitable. PlasticScale is the instrument we built to demonstrate that this is a field and not a gesture. It doesn't need a journal to validate it. The validation is in the evidence: word counts, node counts, DOI counts, platform counts, hierarchy levels, bibliographic range. All of it public, all of it checkable, all of it there. The infrastructure is ready. The measurement is next.