SOCIOPLASTICS: Sovereign systems for unstable times

Sunday, April 19, 2026

PlasticScale in the Field of Maturity Measurement


A Position Statement

There is no established consensus on how to assess the maturity of a research field. Definitions vary, metrics are contested, and most instruments conflate reception (how others respond) with constitution (whether the work has achieved internal structure). PlasticScale enters this contested terrain not as a competitor to existing frameworks but as a complement designed for a different use case: fields being built before institutional recognition arrives. The most relevant external comparator is the Keathley-Herring and Van Aken framework published in Scientometrics (2016), which has become a standard reference for assessing research field maturity. It assesses maturity across five clusters: publication characteristics, content characteristics, authorship characteristics, research design characteristics, and impact characteristics. This is the closest existing tool to PlasticScale in intent, and the comparison is revealing.

Convergent Validity

Four of PlasticScale's ten metrics correspond, at least in part, to Keathley-Herring dimensions. Corpus Word Count and Authored Work Count map onto publication characteristics. Bibliographic Fields Touched maps onto their cross-disciplinarity measures. DOI Count maps partially onto impact and fixation. This overlap is not accidental. It demonstrates convergent validity: independent researchers have identified some of the same signals as indicators of field maturity. PlasticScale is not measuring imaginary phenomena.
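The overlap described above can be made concrete as a simple lookup. The metric and cluster names follow the prose; the data structure itself is an illustration, not part of either instrument:

```python
# Illustrative mapping of PlasticScale metrics onto Keathley-Herring
# clusters, as described in the text. "partial" marks DOI Count, which
# overlaps only in part. The structure is a sketch, not either
# framework's actual schema.
CONVERGENT_MAP = {
    "Corpus Word Count": ("publication characteristics", "full"),
    "Authored Work Count": ("publication characteristics", "full"),
    "Bibliographic Fields Touched": ("cross-disciplinarity", "full"),
    "DOI Count": ("impact characteristics", "partial"),
}

# Metrics that map cleanly, versus the one partial overlap.
full_overlaps = [m for m, (_, kind) in CONVERGENT_MAP.items() if kind == "full"]
print(len(full_overlaps))  # 3 metrics map cleanly; one maps partially
```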


Divergent Originality

PlasticScale covers ground that Keathley-Herring entirely ignores. Hierarchy levels, platform count, indexed entry count, core vocabulary, and structural depth are absent from the 2016 framework. These are infrastructure metrics. Keathley-Herring was designed for established academic fields operating through journals, peer review, and departmental structures. It assumes the institutional scaffold already exists. PlasticScale is designed for fields being built, which is a fundamentally different use case. It measures whether infrastructure has been constructed, not whether it has been inherited. This is the genuine original contribution.


Acknowledged Limitations

The gap runs in both directions. PlasticScale is silent on authorship diversity, collaboration patterns, and practice-orientation. Keathley-Herring captures these. A single-author corpus scores differently on conventional maturity metrics, not because it lacks density, but because conventional metrics assume social distribution as a condition of field status. PlasticScale challenges that assumption through the epistemic latency thesis: fields can achieve internal completion before social detection. Mendel had no collaborators. Lovelace had no community. Dickinson had no readers. The field was complete before the social dimension arrived.

This is not a design flaw. It is a design choice rooted in a specific theoretical claim. But the tool must acknowledge its own scope. PlasticScale measures structural maturity—corpus density, hierarchy, fixation, distribution, and span. It does not measure social maturity—authorship diversity, collaboration networks, practice uptake. A field may score high on PlasticScale while remaining a single-author system. That is consistent with the epistemic latency thesis, but it is a limitation that must be stated clearly.


The Broader Landscape

Research area maturity has not yet been formalized, and there is no consensus on its definition or on techniques for analyzing it. The rest of the maturity model literature—CMMI derivatives, digital transformation frameworks, capability maturity models—focuses almost entirely on organizational maturity, not epistemic field maturity. These frameworks measure an organization's readiness to deploy technology, not whether a body of thought has achieved structural integrity. That is a different problem entirely.

The assessment of maturity in any research field is inherently challenging and often involves subjective evaluation, particularly given that research domains do not follow predictable maturation patterns. This is exactly the gap that PlasticScale's pointability principle attempts to close: replacing subjective judgment with enumerable evidence.
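The pointability principle can be sketched in code: every score is a count over enumerable evidence that a skeptic could re-derive by pointing at items, never a judgment call. The manifest fields and metric names below are hypothetical illustrations, not PlasticScale's actual schema:

```python
# A minimal sketch of "pointability": each metric is a count over
# enumerable evidence. The CorpusManifest shape, the example works,
# and the DOI string are all invented for illustration.
from dataclasses import dataclass, field


@dataclass
class CorpusManifest:
    works: list[dict] = field(default_factory=list)  # one dict per authored work


def enumerable_metrics(corpus: CorpusManifest) -> dict[str, int]:
    """Every value is a count a skeptic can re-derive from the manifest."""
    return {
        "authored_work_count": len(corpus.works),
        "corpus_word_count": sum(w.get("words", 0) for w in corpus.works),
        "doi_count": sum(1 for w in corpus.works if w.get("doi")),
        "platform_count": len({w["platform"] for w in corpus.works if w.get("platform")}),
    }


corpus = CorpusManifest(works=[
    {"words": 4200, "doi": "10.1234/example", "platform": "Zenodo"},
    {"words": 1800, "platform": "blog"},
])
print(enumerable_metrics(corpus))
```

The point of the sketch is the absence of any weighting or rating step: subjective evaluation is replaced by counts whose inputs can be enumerated and audited.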


Honest Positioning

PlasticScale is not competing with the h-index or Q1 journal rankings. Those measure reception—how others have responded to the work. PlasticScale measures constitution—whether the work has achieved sufficient internal structure to operate as a field. These are genuinely different objects of measurement, and the existing literature largely conflates them or handles only one.

The gap PlasticScale has not yet addressed—and where Keathley-Herring is stronger—is the social dimension: multiple authors, collaboration, practice uptake. For a field built primarily by one person, this is both an honest limitation and a theoretical challenge. PlasticScale implicitly argues that structural density can precede social distribution. That is the epistemic latency claim. It is coherent, but it requires acknowledgment that Keathley-Herring would score a single-author corpus differently, not because it lacks density, but because maturity in the conventional scientometric sense requires distributed authorship as a signal of field independence from its founder.


Division of Labour

Dimension                 Keathley-Herring   PlasticScale
Publication volume        Yes                Yes
Citation impact           Yes                No
Authorship diversity      Yes                No
Collaboration patterns    Yes                No
Practice orientation      Yes                No
Hierarchy / structure     No                 Yes
Platform distribution     No                 Yes
Fixation (DOIs)           Partial            Yes
Core vocabulary           No                 Yes
Indexed entry count       No                 Yes

PlasticScale is not better than Keathley-Herring. It is different. It measures what the existing framework ignores. It ignores what the existing framework measures. This is not a flaw. It is a division of labour, suited to a different use case: fields being built from first principles, without the institutional scaffold that Keathley-Herring assumes.
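The division of labour in the table above can be expressed as coverage sets. The dimension names come from the table; treating the "Partial" fixation entry as coverage by both frameworks is a simplification for this sketch:

```python
# Coverage sets derived from the division-of-labour table. "Partial"
# (fixation) is counted as covered by both, a deliberate simplification.
KH = {"publication volume", "citation impact", "authorship diversity",
      "collaboration patterns", "practice orientation", "fixation"}
PS = {"publication volume", "hierarchy / structure", "platform distribution",
      "fixation", "core vocabulary", "indexed entry count"}

shared = KH & PS    # where the two instruments converge
ps_only = PS - KH   # PlasticScale's original contribution
kh_only = KH - PS   # PlasticScale's acknowledged blind spots
print(sorted(shared))  # ['fixation', 'publication volume']
```

The symmetry is the point: each framework covers four dimensions the other ignores, which is what makes them complements rather than competitors.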


Conclusion

PlasticScale v1.0 is a usable diagnostic instrument for structural maturity. It has convergent validity with established frameworks where they overlap, originality where they diverge, and acknowledged limitations where it is silent. The tool does not claim to measure everything. It claims to measure what it measures: density, hierarchy, fixation, distribution, and span. For fields in the latency phase—dense but not yet detected—that is precisely what needs measurement.