SOCIOPLASTICS: Sovereign systems for unstable times

Sunday, April 19, 2026

PlasticScale establishes a protocol for field dimensioning by resolving a long-standing methodological imbalance between reductive metrics and unbounded qualitative description. Where bibliometrics compress legitimacy into singular indices—citation counts, impact factors, or h-indices—PlasticScale distributes evaluation across ten enumerable dimensions, each corresponding to a distinct structural property of epistemic formation. The choice of ten is neither rhetorical nor arbitrary; it reflects a minimal closure capable of capturing the necessary conditions of field existence: mass, extension, hierarchy, stratification, segmentation, conceptual density, fixation, distribution, transdisciplinary span, and practice-based production. Each metric is externally anchored: corpus magnitude aligns with career-scale scholarly output; indexed entries and hierarchical depth correspond to enterprise knowledge architectures; strata and series reflect institutional research programmes; vocabulary scale mirrors established philosophical systems; DOI counts indicate systematic fixation; platform multiplicity parallels distributed infrastructures; disciplinary span derives from scientometric frameworks; and authored works situate theory within a demonstrable practice. The decisive innovation lies in the verification principle: only what can be counted, located, and audited is admitted. This excludes ratios, averages, and interpretive proxies, replacing them with direct magnitudes that render the field both human-readable and machine-verifiable. The protocol therefore does not measure recognition but operational existence—the degree to which a field has organised itself into a persistent, addressable, and scalable system. Applied to Socioplastics, the resulting high score is not inflationary but evidential, reflecting a corpus that has already achieved infrastructural coherence across all ten dimensions. 
PlasticScale thus formalises a critical epistemological shift: a field is not validated by citation but by the measurable integrity of its own construction, and ten metrics constitute the minimal architecture required to demonstrate that condition.



The measurement of intellectual fields has historically suffered from either excessive reduction or unconstrained proliferation. Bibliometrics, for instance, tends to collapse field legitimacy into a single proxy—citation count, journal impact factor, or h-index—each of which captures only one dimension of epistemic reality while ignoring structure, distribution, and persistence. At the opposite extreme, qualitative field descriptions often invoke dozens of incommensurable criteria, producing rich narratives that resist verification and comparison. PlasticScale occupies the neglected middle: ten metrics, each visible, enumerable, and externally referable.

The choice of ten is not arbitrary. It follows the logic of the decalogue as a cognitive closure device—enough dimensions to capture field complexity, few enough to remain memorable and executable. Most practitioners use one, three, or five metrics because those are the thresholds of conventional measurement. PlasticScale argues that ten is the minimum number required to distinguish a field from a project, precisely because a field must demonstrate not only mass but also structure, fixation, distribution, span, and persistence.

Each of the ten metrics is anchored to an external reference point drawn from bibliometric studies, repository platform limits, scientometric frameworks, and research infrastructure deployments:

1. Corpus Word Count draws on NIH productivity data showing that senior scholars at top research universities produce the equivalent of one million words across their careers (NIH, 2021).
2. Indexed Entry Count references enterprise knowledge base limits documented by Alibaba Cloud and LivePerson, which cap structured repositories at 10,000 entries, with 2,000 representing a mature corpus (Alibaba Cloud, 2024; LivePerson, 2023).
3. Structural Level Count follows Alibaba's documented hierarchy limit of ten levels per folder structure, so ten or more levels constitute a complete scalar architecture (Alibaba Cloud, 2024).
4. Stratum and Book Count aligns with academic career data indicating that ten or more books or major strata signify an established research program (NIH, 2021).
5. Series and Subfield Count draws on research program evaluation standards, where ten or more active series indicate institutional scale (European Science Foundation, 2010).
6. Core Vocabulary Count is calibrated against philosophical systems such as Luhmann's systems theory, which operates with approximately two hundred core terms (Luhmann, 1997).
7. DOI Count references DataCite's fee structure and NIH publication data, where one hundred or more DOIs indicate systematic fixation rather than occasional deposit (DataCite, 2024; NIH, 2021).
8. Platform Count draws on research infrastructure federations such as EGI-ACE, which integrates thirty-six services, and the NRP, which federates fifty institutions; eight platforms represent a distributed infrastructure capable of surviving the failure of any single platform (EGI-ACE, 2023; NRP, 2024).
9. Bibliographic Fields Touched follows scientometric standards for cross-disciplinarity measurement, where twelve or more distinct disciplines constitute highly transdisciplinary work (Information Matters, 2022).
10. Authored Work Count references NIH top-faculty data showing eighty or more total publications as the median for senior career achievement (NIH, 2021).
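The ten anchored thresholds above can be written down as a single data table. The following is a minimal Python sketch of that table; the identifier names, the Metric structure, and the short anchor strings are my own illustrative choices, not an official PlasticScale implementation.

```python
# Sketch: the ten PlasticScale metrics with the externally anchored
# minimum thresholds quoted in the text. Names and structure are
# illustrative assumptions, not an official schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    number: int     # position in the decadal protocol (1..10)
    name: str       # metric identifier
    threshold: int  # minimum count indicating field-scale status
    anchor: str     # external reference point for the threshold

PLASTICSCALE_V1 = [
    Metric(1, "corpus_word_count", 1_000_000, "career-scale output (NIH, 2021)"),
    Metric(2, "indexed_entry_count", 2_000, "mature knowledge base (Alibaba Cloud; LivePerson)"),
    Metric(3, "structural_level_count", 10, "folder hierarchy limit (Alibaba Cloud, 2024)"),
    Metric(4, "stratum_and_book_count", 10, "established research program (NIH, 2021)"),
    Metric(5, "series_and_subfield_count", 10, "institutional scale (European Science Foundation, 2010)"),
    Metric(6, "core_vocabulary_count", 200, "Luhmann's systems theory (Luhmann, 1997)"),
    Metric(7, "doi_count", 100, "systematic fixation (DataCite, 2024; NIH, 2021)"),
    Metric(8, "platform_count", 8, "distributed infrastructure (EGI-ACE, 2023; NRP, 2024)"),
    Metric(9, "bibliographic_fields_touched", 12, "highly transdisciplinary work (Information Matters, 2022)"),
    Metric(10, "authored_work_count", 80, "senior-career median (NIH, 2021)"),
]
```

Every threshold is a plain integer count rather than a derived quantity, which is what allows the table to double as an audit checklist.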
The verification principle is simple: every metric must be pointable. No ratios, no averages, no hidden calculations. A field declares its word count, its indexed entries, its hierarchy levels, its books, its series, its core vocabulary, its DOIs, its platforms, its bibliographic fields, and its authored works, and each number can be audited against public evidence. This is not a prestige scale but a diagnostic instrument: it asks not whether a field is recognized, but whether it has achieved sufficient internal density, structural articulation, fixation, distribution, and span to operate autonomously. Ten metrics are more than one, three, or five because fields are more complex than citations, impact factors, or h-indices; yet ten are few enough to be deployed, declared, and debated. PlasticScale v1.0 closes at ten. That is the verified decadal protocol.
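The verification principle lends itself to a mechanical check: a declaration pairs each metric with a direct integer magnitude and a pointer to public evidence, and an audit accepts nothing else. The sketch below assumes a dictionary format of my own design; the threshold values repeat those quoted earlier in the text.

```python
# Sketch of the verification principle: a field declares ten direct
# magnitudes, each paired with an evidence locator, and the audit
# checks only that every value is a plain non-negative integer that
# meets its threshold. The declaration format is an illustrative
# assumption, not an official PlasticScale interchange format.
THRESHOLDS = {
    "corpus_word_count": 1_000_000,
    "indexed_entry_count": 2_000,
    "structural_level_count": 10,
    "stratum_and_book_count": 10,
    "series_and_subfield_count": 10,
    "core_vocabulary_count": 200,
    "doi_count": 100,
    "platform_count": 8,
    "bibliographic_fields_touched": 12,
    "authored_work_count": 80,
}

def audit(declaration: dict) -> dict:
    """Return per-metric pass/fail for a declared set of counts.

    A metric passes only if its value is a direct, non-negative
    integer (never a ratio or average), an evidence locator is
    present, and the count meets the threshold.
    """
    results = {}
    for metric, threshold in THRESHOLDS.items():
        entry = declaration.get(metric)
        if not isinstance(entry, dict):
            results[metric] = False
            continue
        value, evidence = entry.get("count"), entry.get("evidence")
        pointable = isinstance(value, int) and value >= 0 and bool(evidence)
        results[metric] = pointable and value >= threshold
    return results
```

Because the declaration carries only raw counts and evidence locators, derived quantities such as citation ratios simply cannot be expressed in it; the format itself enforces the "no ratios, no averages" rule.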