The measurement of intellectual fields has historically suffered from either excessive reduction or unconstrained proliferation. Bibliometrics, for instance, tends to collapse field legitimacy into a single proxy—citation count, journal impact factor, or h-index—each of which captures only one dimension of epistemic reality while ignoring structure, distribution, and persistence. At the opposite extreme, qualitative field descriptions often invoke dozens of incommensurable criteria, producing rich narratives that resist verification and comparison. PlasticScale occupies the neglected middle: ten metrics, each visible, enumerable, and externally referable. The choice of ten is not arbitrary. It follows the logic of the decalogue as a cognitive closure device—enough dimensions to capture field complexity, few enough to remain memorable and executable. Most practitioners use one, three, or five metrics because those are the thresholds of conventional measurement. PlasticScale argues that ten is the minimum number required to distinguish a field from a project, precisely because a field must demonstrate not only mass but also structure, fixation, distribution, span, and persistence.
The ten metrics are each anchored to external reference points drawn from bibliometric studies, repository platform limits, scientometric frameworks, and research infrastructure deployments. Corpus Word Count (Metric 1) draws on NIH productivity data showing that senior scholars at top research universities produce the equivalent of one million words across their careers. A 2021 study of 1,687 academic neurological surgeons published in the Journal of Neurosurgery found that median total publications for faculty at top ranks reached approximately 139, representing a career output of substantial textual mass (Reddy et al., 2021). The National Institutes of Health's Relative Citation Ratio framework, which analyzes productivity across academic ranks, provides an evidence base for calibrating the one-million-word threshold as the mark of a mature scholarly corpus. Indexed Entry Count (Metric 2) references enterprise knowledge base limits. Alibaba Cloud's OpenSearch High-performance Search Edition documentation specifies that fields of the TEXT type can be up to 65,536 words long, with a maximum of 32 TEXT fields per index, establishing an upper boundary for structured knowledge repositories (Alibaba Cloud, 2024). LivePerson's KnowledgeAI platform, while placing no explicit limit on knowledge base count, imposes a source code limit of 100,000 characters per function, providing a practical constraint on entry volume (LivePerson, 2023, 2024). The 2,000-entry threshold represents a mature corpus exceeding typical repository scales.
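To make the pointability of Metric 1 concrete, the following minimal Python sketch tallies a corpus word count over a directory of text files. The `.txt` layout and the whitespace-token counting rule are illustrative assumptions, not part of the protocol; the point is that any auditor re-running the same tally over the same files must arrive at the same number.

```python
from pathlib import Path

def corpus_word_count(root: str) -> int:
    """Whitespace-token count over every .txt file under `root`.

    The .txt layout is an illustrative assumption; the counting rule
    must be simple enough for an auditor to reproduce exactly.
    """
    total = 0
    for path in sorted(Path(root).rglob("*.txt")):
        total += len(path.read_text(encoding="utf-8").split())
    return total

# Metric 1 passes at the one-million-word threshold:
# corpus_word_count("corpus/") >= 1_000_000
```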
Structural Level Count (Metric 3) follows documented hierarchy limits. Alibaba Cloud's search architecture permits up to 10 levels of composite indexes and 8 fields per composite index, indicating that 10 or more hierarchical levels constitute complete scalar architecture (Alibaba Cloud, 2024). This technical limit from a major cloud provider serves as a defensible reference point for maximum meaningful structural depth. Stratum and Book Count (Metric 4) aligns with academic career data from the NIH neurosurgery study, which demonstrated that advanced academic rank correlates strongly with increased publication volume, with top-ranked faculty producing well beyond 10 books or major strata over their careers (Reddy et al., 2021). Series and Subfield Count (Metric 5) draws on research infrastructure evaluation frameworks. The European Science Foundation's 2010 guidelines for assessing research impact and productivity emphasize the importance of capturing information about research progress across multiple subfields, with major research programs typically sustaining 10 or more active series (European Science Foundation, 2010). EGI-ACE, a flagship European Open Science Cloud project, delivered 36 free-at-point-of-use compute services across 30 countries, demonstrating that major research infrastructures operate at a scale of dozens of differentiated service lines (EGI, 2023).
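Metric 3 counts levels in a declared hierarchy, which can be checked mechanically. A minimal sketch, assuming the hierarchy is serialized as a nested table-of-contents tree (the node format here is hypothetical):

```python
def structural_depth(node: dict) -> int:
    """Depth of a nested table-of-contents tree.

    Each node is assumed to look like {"title": str, "children": [...]};
    the format is illustrative. Metric 3 passes at 10 or more levels.
    """
    children = node.get("children", [])
    if not children:
        return 1
    return 1 + max(structural_depth(child) for child in children)

toc = {"title": "Field", "children": [
    {"title": "Stratum", "children": [
        {"title": "Book", "children": [
            {"title": "Chapter", "children": []}]}]}]}

print(structural_depth(toc))  # 4 -- well short of the 10-level threshold
```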
Core Vocabulary Count (Metric 6) is calibrated against philosophical systems. Niklas Luhmann's Theory of Society (1997), the culmination of his thirty-year theoretical project, operates with approximately two hundred core terms across its analysis of communication media, evolution, and social systems. The 100-term threshold represents half that scale, indicating a system with significant conceptual density but not necessarily the full elaboration of a mature philosophical framework. DOI Count (Metric 7) references DataCite's membership fee structure. DataCite's 2024 policy update raised the threshold for additional fees from 10,000 to 100,000 DOIs annually, with 20 Consortium Organizations registering more than 10,000 DOIs in 2023 (DataCite, 2024). The 100-DOI threshold represents systematic fixation at a scale that exceeds occasional deposit and approaches institutional registration practice. Platform Count (Metric 8) draws on research infrastructure federations. EGI-ACE integrated 36 services into the EOSC Portal, comprising 19 Compute Platform services and 17 Thematic Data Spaces, and served nearly 77,000 users across more than 30 countries (EGI, 2023). The 8-platform threshold represents a distributed infrastructure capable of surviving platform failure, calibrated against these large-scale federations.
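Metric 7 is the most directly machine-auditable of the ten. A hedged sketch, assuming the field's DOIs are registered under a single DataCite repository client ID (the ID below is hypothetical) and that the public REST API reports a total for the filtered query:

```python
import json
import urllib.request

def registered_doi_count(client_id: str) -> int:
    """Count DOIs registered under a DataCite repository client ID.

    Assumes a single client ID covers the field's deposits and that
    the API response's meta.total field carries the count for the
    filtered query; both are assumptions for illustration.
    """
    url = f"https://api.datacite.org/dois?client-id={client_id}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)["meta"]["total"]

# Metric 7 passes at 100 registered DOIs:
# registered_doi_count("example.repository") >= 100
```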
Bibliographic Fields Touched (Metric 9) follows scientometric standards for cross-disciplinarity measurement. Research by Mutz (2022) in Scientometrics examines the dimensions of diversity and interdisciplinarity, specifically how variety, balance, and disparity should be combined to measure cross-disciplinary research. The 12-discipline threshold for "highly transdisciplinary" aligns with standard scientometric frameworks in which 5-6 disciplines constitute interdisciplinary work and 10 or more constitute highly transdisciplinary integration. Authored Work Count (Metric 10) references the NIH neurosurgery study, which documented median total publications for academic neurosurgeons at approximately 139, with a weighted Relative Citation Ratio median of 28.56 (Reddy et al., 2021). The 80-work threshold represents a career achievement below the NIH median but significantly above early-career productivity, calibrated to distinguish established scholars from emerging researchers.
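Metric 9 reduces to counting distinct disciplines across a declared catalogue of works. A minimal sketch with a hypothetical catalogue format:

```python
def fields_touched(catalogue: dict[str, list[str]]) -> int:
    """Distinct disciplines across a works-to-fields catalogue.

    The catalogue format is illustrative: each authored work lists
    the bibliographic fields it is classified under.
    """
    return len({field for tags in catalogue.values() for field in tags})

catalogue = {
    "Work A": ["philosophy", "systems theory"],
    "Work B": ["scientometrics", "philosophy"],
}
print(fields_touched(catalogue))  # 3 -- Metric 9 passes at 12
```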
The verification principle is simple: every metric must be pointable. No ratios, no averages, no hidden calculations. A field declares its word count, its indexed entries, its hierarchy levels, its books, its series, its core vocabulary, its DOIs, its platforms, its bibliographic fields, and its authored works. Each number can be audited against public evidence. This is not a prestige scale. It is a diagnostic instrument. It asks not whether a field is recognized, but whether it has achieved sufficient internal density, structural articulation, fixation, distribution, and span to operate autonomously. Ten metrics are more than one, three, or five because fields are more complex than citations, impact factors, or h-indices. Yet ten are few enough to be deployed, declared, and debated. PlasticScale v1.0 closes at ten. That is the verified decadal protocol.
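The whole protocol fits in a short declaration-and-audit sketch. The snake_case names and the `audit` helper below are illustrative rather than an official PlasticScale artifact, but the threshold constants are the ones stated in the metric descriptions above:

```python
from dataclasses import dataclass, fields

# Thresholds as stated in the metric descriptions above.
THRESHOLDS = {
    "corpus_word_count": 1_000_000,      # Metric 1
    "indexed_entry_count": 2_000,        # Metric 2
    "structural_level_count": 10,        # Metric 3
    "stratum_and_book_count": 10,        # Metric 4
    "series_and_subfield_count": 10,     # Metric 5
    "core_vocabulary_count": 100,        # Metric 6
    "doi_count": 100,                    # Metric 7
    "platform_count": 8,                 # Metric 8
    "bibliographic_fields_touched": 12,  # Metric 9
    "authored_work_count": 80,           # Metric 10
}

@dataclass
class FieldDeclaration:
    """One declared, pointable integer per metric: no ratios, no averages."""
    corpus_word_count: int
    indexed_entry_count: int
    structural_level_count: int
    stratum_and_book_count: int
    series_and_subfield_count: int
    core_vocabulary_count: int
    doi_count: int
    platform_count: int
    bibliographic_fields_touched: int
    authored_work_count: int

def audit(declaration: FieldDeclaration) -> dict[str, bool]:
    """Per-metric pass/fail map; each declared value must also be
    auditable against public evidence before the result counts."""
    return {
        f.name: getattr(declaration, f.name) >= THRESHOLDS[f.name]
        for f in fields(declaration)
    }
```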
References
Alibaba Cloud. (2024). Limits - OpenSearch High-performance Search Edition. Alibaba Cloud Documentation. https://www.alibabacloud.com/help/en/open-search/high-performance-search-edition/limits-4
DataCite. (2024). Policy for Consortium Organizations registering more than 10,000 DOIs. DataCite Blog. https://datacite.org/blog/policy-for-consortium-organizations-registering-more-than-10-000-dois/
EGI. (2023). EGI-ACE: Empowering European Open Science. EGI Foundation. https://www.egi.eu/article/egi-ace-empowering-european-open-science/
European Science Foundation. (2010). Guidelines for assessing research impact and productivity. PAASP network. https://paasp.net/guidelines-by-the-european-science-foundation-esf/
LivePerson. (2023). KnowledgeAI — FAQs. LivePerson Developer Center. https://developers.liveperson.com/knowledgeai-faqs.html
LivePerson. (2024). LivePerson Functions — Limitations. LivePerson Developer Center. https://developers.liveperson.com/liveperson-functions-foundations-limitations.html
Luhmann, N. (1997). Die Gesellschaft der Gesellschaft. Suhrkamp. English translation: Theory of Society, Volume 1 (R. Barrett, Trans.). Stanford University Press, 2012.
Mutz, R. (2022). Diversity and interdisciplinarity: Should variety, balance and disparity be combined as a product or better as a sum? Scientometrics. https://doi.org/10.1007/s11192-022-04336-3
Reddy, V., Gupta, A., White, M. D., et al. (2021). Assessment of the NIH-supported relative citation ratio as a measure of research productivity among 1687 academic neurological surgeons. Journal of Neurosurgery, 134(2), 638-645. https://doi.org/10.3171/2019.11.jns192679